Haibin Lin
Bytedance
Verified email at bytedance.com - Homepage
Title · Cited by · Year
ResNeSt: Split-Attention Networks
H Zhang, C Wu, Z Zhang, Y Zhu, Z Zhang, H Lin, Y Sun, T He, J Mueller, ...
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
1582 · 2022
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
International Conference on Learning Representations, 2019
680 · 2019
Self-Driving Database Management Systems
A Pavlo, G Angulo, J Arulraj, H Lin, J Lin, L Ma, P Menon, TC Mowry, ...
CIDR 4, 1, 2017
312 · 2017
GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing
J Guo, H He, T He, L Lausen, M Li, H Lin, X Shi, C Wang, J Xie, S Zha, ...
Journal of Machine Learning Research, 2019
208 · 2019
Temporal-Contextual Recommendation in Real-Time
Y Ma, BM Narayanaswamy, H Lin, H Ding
KDD 2020, 2020
64 · 2020
Is Network the Bottleneck of Distributed Training?
Z Zhang, C Chang, H Lin, Y Wang, R Arora, X Jin
SIGCOMM NetAI, 2020
62 · 2020
Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates
C Xie, O Koyejo, I Gupta, H Lin
NeurIPS Workshop on Optimization for Machine Learning, 2019
42 · 2019
ResNeSt: Split-Attention Networks (2020)
H Zhang, C Wu, Z Zhang, Y Zhu, H Lin, Z Zhang, Y Sun, T He, J Mueller, ...
arXiv preprint arXiv:2004.08955, 2020
35 · 2020
CSER: Communication-efficient SGD with Error Reset
C Xie, S Zheng, OO Koyejo, I Gupta, M Li, H Lin
Advances in Neural Information Processing Systems 33, 2020
32 · 2020
Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources
H Lin, H Zhang, Y Ma, T He, Z Zhang, S Zha, M Li
arXiv preprint arXiv:1904.12043, 2019
19 · 2019
Accelerated Large Batch Optimization of BERT Pretraining in 54 minutes
S Zheng, H Lin, S Zha, M Li
arXiv preprint arXiv:2006.13484, 2020
17 · 2020
Compressed Communication for Distributed Training: Adaptive Methods and System
Y Zhong, C Xie, S Zheng, H Lin
arXiv preprint arXiv:2105.07829, 2021
7 · 2021
Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies
Z Wang, H Lin, Y Zhu, TSE Ng
Proceedings of the Eighteenth European Conference on Computer Systems, 867-882, 2023
6 · 2023
Deep Graph Library
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
6 · 2018
SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training
Y Chen, C Xie, M Ma, J Gu, Y Peng, H Lin, C Wu, Y Zhu
Advances in Neural Information Processing Systems 35, 17981-17993, 2022
4 · 2022
dPRO: A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training
H Hu, C Jiang, Y Zhong, Y Peng, C Wu, Y Zhu, H Lin, C Guo
Proceedings of Machine Learning and Systems 4, 623-637, 2022
4 · 2022
Just-in-Time Dynamic-Batching
S Zha, Z Jiang, H Lin, Z Zhang
Conference on Neural Information Processing Systems, 2018
3 · 2018
Dive into Deep Learning for Natural Language Processing
H Lin, X Shi, L Lausen, A Zhang, H He, S Zha, A Smola
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
2 · 2019
LEMON: Lossless model expansion
Y Wang, J Su, H Lu, C Xie, T Liu, J Yuan, H Lin, R Sun, H Yang
arXiv preprint arXiv:2310.07999, 2023
1 · 2023
POSTER: LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization
J Zhao, B Wan, C Wu, Y Peng, H Lin
Proceedings of the 29th ACM SIGPLAN Annual Symposium on Principles and …, 2024
2024