Wanrong Zhu
Cited by
Text Infilling
W Zhu, Z Hu, E Xing
arXiv preprint arXiv:1901.00158, 2019
Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation
Z Hu, H Shi, B Tan, W Wang, Z Yang, T Zhao, J He, L Qin, D Wang, X Ma, ...
ACL 2019: System Demonstration, 159–164, 2019
Diagnosing Vision-and-Language Navigation: What Really Matters
W Zhu, Y Qi, P Narayana, K Sone, S Basu, XE Wang, Q Wu, M Eckstein, ...
NAACL 2022, 5981–5993, 2021
Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation
W Zhu, XE Wang, TJ Fu, A Yan, P Narayana, K Sone, S Basu, WY Wang
EACL 2021, 1207–1221, 2020
Imagination-Augmented Natural Language Understanding
Y Lu, W Zhu, XE Wang, M Eckstein, WY Wang
NAACL 2022, 4392–4402, 2022
ImaginE: An Imagination-based Automatic Evaluation Metric for Natural Language Generation
W Zhu, XE Wang, A Yan, M Eckstein, WY Wang
EACL 2023, 2021
Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations
W Zhu, XE Wang, P Narayana, K Sone, S Basu, WY Wang
EMNLP 2020, 8806–8811, 2020
Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning
X Wang, W Zhu, WY Wang
arXiv preprint arXiv:2301.11916, 2023
Visualize Before You Write: Imagination-Guided Open-Ended Text Generation
W Zhu, A Yan, Y Lu, W Xu, XE Wang, M Eckstein, WY Wang
EACL 2023, 2022
Neuro-Symbolic Causal Language Planning with Commonsense Prompting
Y Lu, W Feng, W Zhu, W Xu, XE Wang, M Eckstein, WY Wang
ICLR 2023, 2022
End-to-end Dense Video Captioning as Sequence Generation
W Zhu, B Pang, A Thapliyal, WY Wang, R Soricut
COLING 2022, 5651–5665, 2022
CLIP also Understands Text: Prompting CLIP for Phrase Understanding
A Yan, J Li, W Zhu, Y Lu, WY Wang, J McAuley
arXiv preprint arXiv:2210.05836, 2022