Wei Zhang
IBM T. J. Watson Research Center
Verified email at us.ibm.com
Cited by
Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent
X Lian, C Zhang, H Zhang, CJ Hsieh, W Zhang, J Liu
arXiv preprint arXiv:1705.09056, 2017
Automated atomicity-violation fixing
G Jin, L Song, W Zhang, S Lu, B Liblit
PLDI 2011, 389-400, 2011
Staleness-aware Async-SGD for Distributed Deep Learning
W Zhang, S Gupta, X Lian, J Liu
IJCAI 2016, 2016
Automated concurrency-bug fixing
G Jin, W Zhang, D Deng, B Liblit, S Lu
OSDI 2012, 221-236, 2012
Asynchronous decentralized parallel stochastic gradient descent
X Lian*, W Zhang*, C Zhang, J Liu
ICML 2018, 2018
ConSeq: detecting concurrency bugs through sequential errors
W Zhang, J Lim, R Olichandran, J Scherpelz, G Jin, S Lu, T Reps
ASPLOS 2011, 251-264, 2011
ConMem: detecting severe concurrency bugs through an effect-oriented approach
W Zhang, C Sun, S Lu
ASPLOS 2010, 179-192, 2010
Model accuracy and runtime tradeoff in distributed deep learning: A systematic study
S Gupta*, W Zhang*, F Wang
2016 IEEE 16th International Conference on Data Mining (ICDM), 171-180, 2016
Adacomp: Adaptive residual gradient compression for data-parallel distributed training
CY Chen, J Choi, D Brand, A Agrawal, W Zhang, K Gopalakrishnan
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
GLB: lifeline-based global load balancing library in x10
W Zhang, O Tardieu, D Grove, B Herta, T Kamada, V Saraswat, ...
Proceedings of the first workshop on Parallel programming for analytics …, 2014
ConAir: Featherweight concurrency bug recovery via single-threaded idempotent execution
W Zhang, M De Kruijf, A Li, S Lu, K Sankaralingam
Proceedings of the eighteenth international conference on Architectural …, 2013
Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks
X Sun, J Choi, CY Chen, N Wang, S Venkataramani, VV Srinivasan, X Cui, ...
Advances in neural information processing systems 32, 4900-4909, 2019
Model accuracy and runtime tradeoff in distributed deep learning
S Gupta, W Zhang, J Milthorpe
arXiv preprint arXiv:1509.04210, 2015
Efficient Concurrency-Bug Detection Across Inputs
D Deng, W Zhang, S Lu
OOPSLA 2013, 2013
Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks
X Cui, W Zhang, Z Tuske, M Picheny
NeurIPS 2018, 2018
Conmem: Detecting crash-triggering concurrency bugs through an effect-oriented approach
W Zhang, C Sun, J Lim, S Lu, T Reps
ACM Transactions on Software Engineering and Methodology (TOSEM) 22 (2), 1-33, 2013
Towards better understanding of adaptive gradient algorithms in generative adversarial nets
M Liu, Y Mroueh, J Ross, W Zhang, X Cui, P Das, T Yang
arXiv preprint arXiv:1912.11940, 2019
Distributed Deep Learning Strategies for Automatic Speech Recognition
W Zhang, X Cui, U Finkler, B Kingsbury, G Saon, D Kung, M Picheny
ICASSP 2019, 2019
Fixing, preventing, and recovering from concurrency bugs
DD Deng, GL Jin, M de Kruijf, A Li, B Liblit, S Lu, SX Qi, JL Ren, ...
Science China Information Sciences 58 (5), 1-18, 2015
Gadei: On scale-up training as a service for deep learning
W Zhang, M Feng, Y Zheng, Y Ren, Y Wang, J Liu, P Liu, B Xiang, ...
2017 IEEE International Conference on Data Mining (ICDM), 1195-1200, 2017