Junyu Zhang
Industrial Systems Engineering and Management, National University of Singapore
Highly accurate model for prediction of lung nodule malignancy with CT scans
JL Causey, J Zhang, S Ma, B Jiang, JA Qualls, DG Politte, F Prior, ...
Scientific reports 8 (1), 9286, 2018
On lower iteration complexity bounds for the convex concave saddle point problems
J Zhang, M Hong, S Zhang
Mathematical Programming 194 (1-2), 901-935, 2022
Variational policy gradient method for reinforcement learning with general utilities
J Zhang, A Koppel, AS Bedi, C Szepesvari, M Wang
Advances in Neural Information Processing Systems 33, 4572-4583, 2020
A stochastic composite gradient method with incremental variance reduction
J Zhang, L Xiao
Advances in Neural Information Processing Systems 32, 2019
Multilevel composite stochastic optimization via nested variance reduction
J Zhang, L Xiao
SIAM Journal on Optimization 31 (2), 1131-1157, 2021
On the convergence and sample efficiency of variance-reduced policy gradient method
J Zhang, C Ni, Z Yu, C Szepesvari, M Wang
Advances in Neural Information Processing Systems 34, 2228-2240, 2021
Primal-Dual Optimization Algorithms over Riemannian Manifolds: an Iteration Complexity Analysis
J Zhang, S Ma, S Zhang
Mathematical Programming 184, 445-490, 2019
From low probability to high confidence in stochastic convex optimization
D Davis, D Drusvyatskiy, L Xiao, J Zhang
The Journal of Machine Learning Research 22 (1), 2237-2274, 2021
A composite randomized incremental gradient method
J Zhang, L Xiao
International Conference on Machine Learning, 7454-7462, 2019
A cubic regularized Newton's method over Riemannian manifolds
J Zhang, S Zhang
arXiv preprint arXiv:1805.05565, 2018
Cautious Reinforcement Learning via Distributional Risk in the Dual Domain
J Zhang, AS Bedi, M Wang, A Koppel
IEEE Journal on Selected Areas in Information Theory, 2021
Adaptive stochastic variance reduction for subsampled Newton method with cubic regularization
J Zhang, L Xiao, S Zhang
INFORMS Journal on Optimization 4 (1), 45-64, 2022
Generalization bounds for stochastic saddle point problems
J Zhang, M Hong, M Wang, S Zhang
International Conference on Artificial Intelligence and Statistics, 568-576, 2021
FFT-based gradient sparsification for the distributed training of deep neural networks
L Wang, W Wu, J Zhang, H Liu, G Bosilca, M Herlihy, R Fonseca
Proceedings of the 29th International Symposium on High-Performance Parallel …, 2020
Cubic regularized Newton method for the saddle point models: A global and local convergence analysis
K Huang, J Zhang, S Zhang
Journal of Scientific Computing 91 (2), 60, 2022
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
J Zhang, L Xiao
Mathematical Programming, 1-43, 2021
A sparse completely positive relaxation of the modularity maximization for community detection
J Zhang, H Liu, Z Wen, S Zhang
SIAM Journal on Scientific Computing 40 (5), A3091-A3120, 2018
MARL with general utilities via decentralized shadow reward actor-critic
J Zhang, AS Bedi, M Wang, A Koppel
arXiv preprint arXiv:2106.00543, 2021
Subspace methods with local refinements for eigenvalue computation using low-rank tensor-train format
J Zhang, Z Wen, Y Zhang
Journal of Scientific Computing 70, 478-499, 2017
On the sample complexity and metastability of heavy-tailed policy search in continuous control
AS Bedi, A Parayil, J Zhang, M Wang, A Koppel
arXiv preprint arXiv:2106.08414, 2021