Shiqing Fan
Verified email at nvidia.com - Homepage
Title · Cited by · Year
DAPPLE: A pipelined data parallel approach for training large models
S Fan, Y Rong, C Meng, Z Cao, S Wang, Z Zheng, C Wu, G Long, J Yang, ...
Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of …, 2021
Cited by 154 · 2021
Efficient pipeline planning for expedited distributed DNN training
Z Luo, X Yi, G Long, S Fan, C Wu, J Yang, W Lin
IEEE INFOCOM 2022-IEEE Conference on Computer Communications, 340-349, 2022
Cited by 11 · 2022
Auto-map: A DQN framework for exploring distributed execution plans for DNN workloads
S Wang, Y Rong, S Fan, Z Zheng, LS Diao, G Long, J Yang, X Liu, W Lin
arXiv preprint arXiv:2007.04069, 2020
Cited by 8 · 2020
Parallelizing machine learning optimization algorithms on distributed data-parallel platforms with parameter server
R Gu, S Fan, Q Hu, C Yuan, Y Huang
2018 IEEE 24th International Conference on Parallel and Distributed Systems …, 2018
Cited by 8 · 2018
Optimizing DNN compilation for distributed training with joint OP and tensor fusion
X Yi, S Zhang, L Diao, C Wu, Z Zheng, S Fan, S Wang, J Yang, W Lin
IEEE Transactions on Parallel and Distributed Systems 33 (12), 4694-4706, 2022
Cited by 4 · 2022
iPLAR: Towards Interactive Programming with Parallel Linear Algebra in R
Z Wang, S Fan, R Gu, C Yuan, Y Huang
Algorithms and Architectures for Parallel Processing: 15th International …, 2015
Cited by 1 · 2015