Haibin Lin
ByteDance
Verified email at bytedance.com - Homepage
Title · Cited by · Year
ResNeSt: Split-Attention Networks
H Zhang, C Wu, Z Zhang, Y Zhu, Z Zhang, H Lin, Y Sun, T He, J Mueller, ...
Conference on Computer Vision and Pattern Recognition (CVPR), 2022
1645 · 2022
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
International Conference on Learning Representations, 2019
693 · 2019
Self-Driving Database Management Systems
A Pavlo, G Angulo, J Arulraj, H Lin, J Lin, L Ma, P Menon, TC Mowry, ...
CIDR 4, 1, 2017
317 · 2017
GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing
J Guo, H He, T He, L Lausen, M Li, H Lin, X Shi, C Wang, J Xie, S Zha, ...
Journal of Machine Learning Research, 2019
206 · 2019
Is Network the Bottleneck of Distributed Training?
Z Zhang, C Chang, H Lin, Y Wang, R Arora, X Jin
SIGCOMM NetAI, 2020
67 · 2020
Temporal-Contextual Recommendation in Real-Time
Y Ma, BM Narayanaswamy, H Lin, H Ding
KDD 2020, 2020
64 · 2020
Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates
C Xie, O Koyejo, I Gupta, H Lin
NeurIPS 2020 Workshop on Optimization for Machine Learning, 2019
43 · 2019
CSER: Communication-efficient SGD with Error Reset
C Xie, S Zheng, OO Koyejo, I Gupta, M Li, H Lin
Advances in Neural Information Processing Systems 33, 2020
34 · 2020
Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources
H Lin, H Zhang, Y Ma, T He, Z Zhang, S Zha, M Li
arXiv preprint arXiv:1904.12043, 2019
20 · 2019
Accelerated Large Batch Optimization of BERT Pretraining in 54 minutes
S Zheng, H Lin, S Zha, M Li
arXiv preprint arXiv:2006.13484, 2020
18 · 2020
Compressed Communication for Distributed Training: Adaptive Methods and System
Y Zhong, C Xie, S Zheng, H Lin
arXiv preprint arXiv:2105.07829, 2021
7 · 2021
Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies
Z Wang, H Lin, Y Zhu, TSE Ng
Proceedings of the Eighteenth European Conference on Computer Systems, 867-882, 2023
6 · 2023
Deep Graph Library
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
6 · 2018
SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training
Y Chen, C Xie, M Ma, J Gu, Y Peng, H Lin, C Wu, Y Zhu
Advances in Neural Information Processing Systems 35, 17981-17993, 2022
5 · 2022
dPRO: A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training
H Hu, C Jiang, Y Zhong, Y Peng, C Wu, Y Zhu, H Lin, C Guo
Proceedings of Machine Learning and Systems 4, 623-637, 2022
5 · 2022
Just-in-Time Dynamic-Batching
S Zha, Z Jiang, H Lin, Z Zhang
Conference on Neural Information Processing Systems, 2018
3 · 2018
ResNeSt: Split-Attention Networks (2020)
H Zhang, C Wu, Z Zhang, Y Zhu, H Lin, Z Zhang, Y Sun, T He, J Mueller, ...
arXiv preprint arXiv:2004.08955, 2020
2 · 2020
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
Z Jiang, H Lin, Y Zhong, Q Huang, Y Chen, Z Zhang, Y Peng, X Li, C Xie, ...
arXiv preprint arXiv:2402.15627, 2024
1 · 2024
CDMPP: A Device-Model Agnostic Framework for Latency Prediction of Tensor Programs
H Hu, J Su, J Zhao, Y Peng, Y Zhu, H Lin, C Wu
arXiv preprint arXiv:2311.09690, 2023
1 · 2023
LEMON: Lossless model expansion
Y Wang, J Su, H Lu, C Xie, T Liu, J Yuan, H Lin, R Sun, H Yang
arXiv preprint arXiv:2310.07999, 2023
1 · 2023
Articles 1–20