Xiaohan Chen
Electrical and Computer Engineering (DICE), University of Texas at Austin
Verified email at utexas.edu
Title
Cited by
Year
Can we gain more from orthogonality regularizations in training deep networks?
N Bansal, X Chen, Z Wang
Advances in Neural Information Processing Systems 31, 2018
206 | 2018
Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds
X Chen, J Liu, Z Wang, W Yin
Advances in Neural Information Processing Systems 31, 2018
160 | 2018
Plug-and-play methods provably converge with properly trained denoisers
E Ryu, J Liu, S Wang, X Chen, Z Wang, W Yin
International Conference on Machine Learning, 5546-5557, 2019
154 | 2019
Drawing early-bird tickets: Towards more efficient training of deep networks
H You, C Li, P Xu, Y Fu, Y Wang, X Chen, RG Baraniuk, Z Wang, Y Lin
arXiv preprint arXiv:1909.11957, 2019
129 | 2019
ALISTA: Analytic weights are as good as learned weights in LISTA
J Liu, X Chen
International Conference on Learning Representations (ICLR), 2019
120 | 2019
E2-Train: Training state-of-the-art CNNs with over 80% energy savings
Y Wang, Z Jiang, X Chen, P Xu, Y Zhao, Y Lin, Z Wang
Advances in Neural Information Processing Systems 32, 2019
53 | 2019
Learning to optimize: A primer and a benchmark
T Chen, X Chen, W Chen, H Heaton, J Liu, Z Wang, W Yin
arXiv preprint arXiv:2103.12828, 2021
41 | 2021
EarlyBERT: Efficient BERT training via early-bird lottery tickets
X Chen, Y Cheng, S Wang, Z Gan, Z Wang, J Liu
arXiv preprint arXiv:2101.00063, 2020
32 | 2020
ShiftAddNet: A hardware-inspired deep network
H You, X Chen, Y Zhang, C Li, S Li, Z Liu, Z Wang, Y Lin
Advances in Neural Information Processing Systems 33, 2771-2783, 2020
32 | 2020
SmartExchange: Trading higher-cost memory storage/access for lower-cost computation
Y Zhao, X Chen, Y Wang, C Li, H You, Y Fu, Y Xie, Z Wang, Y Lin
2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture …, 2020
29 | 2020
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
Neural Information Processing Systems (NeurIPS), 2021
20 | 2021
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
X Ma, G Yuan, X Shen, T Chen, X Chen, X Chen, N Liu, M Qin, S Liu, ...
Neural Information Processing Systems (NeurIPS), 2021
14 | 2021
Uncertainty quantification for deep context-aware mobile activity recognition and unknown context discovery
Z Huo, A PakBin, X Chen, N Hurley, Y Yuan, X Qian, Z Wang, S Huang, ...
International Conference on Artificial Intelligence and Statistics, 3894-3904, 2020
12 | 2020
The Elastic Lottery Ticket Hypothesis
X Chen, Y Cheng, S Wang, Z Gan, J Liu, Z Wang
Neural Information Processing Systems (NeurIPS), 2021
9 | 2021
Learning a minimax optimizer: A pilot study
J Shen, X Chen, H Heaton, T Chen, J Liu, W Yin, Z Wang
International Conference on Learning Representations, 2020
8 | 2020
Safeguarded learned convex optimization
H Heaton, X Chen, Z Wang, W Yin
arXiv preprint arXiv:2003.01880, 2020
7 | 2020
MATE: plugging in model awareness to task embedding for meta learning
X Chen, Z Wang, S Tang, K Muandet
Advances in Neural Information Processing Systems 33, 11865-11877, 2020
6 | 2020
The unreasonable effectiveness of random pruning: Return of the most naive baseline for sparse training
S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy
arXiv preprint arXiv:2202.02643, 2022
5 | 2022
Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
arXiv preprint arXiv:2106.14568, 2021
5 | 2021
Federated dynamic sparse training: Computing less, communicating less, yet learning better
S Bibikar, H Vikalo, Z Wang, X Chen
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6080-6088, 2022
4 | 2022