Maksym Andriushchenko
PhD student at EPFL
Verified email at epfl.ch
Square attack: a query-efficient black-box adversarial attack via random search
M Andriushchenko*, F Croce*, N Flammarion, M Hein
ECCV 2020
Cited by: 946
RobustBench: a standardized adversarial robustness benchmark
F Croce*, M Andriushchenko*, V Sehwag*, E Debenedetti*, N Flammarion, ...
NeurIPS 2021 Datasets and Benchmarks Track, Best Paper Honorable Mention …
Cited by: 622
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
M Hein, M Andriushchenko, J Bitterwolf
CVPR 2019 (oral)
Cited by: 604
Formal guarantees on the robustness of a classifier against adversarial manipulation
M Hein, M Andriushchenko
NeurIPS 2017
Cited by: 593
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
M Mosbach, M Andriushchenko, D Klakow
ICLR 2021
Cited by: 377
Understanding and Improving Fast Adversarial Training
M Andriushchenko, N Flammarion
NeurIPS 2020
Cited by: 298
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
M Andriushchenko, M Hein
NeurIPS 2019
Cited by: 280
Provable Robustness of ReLU Networks via Maximization of Linear Regions
F Croce*, M Andriushchenko*, M Hein
AISTATS 2019
Cited by: 182
Towards Understanding Sharpness-Aware Minimization
M Andriushchenko, N Flammarion
ICML 2022
Cited by: 116
On the effectiveness of adversarial training against common corruptions
K Kireev*, M Andriushchenko*, N Flammarion
UAI 2022
Cited by: 98
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
F Croce, M Andriushchenko, ND Singh, N Flammarion, M Hein
AAAI 2022
Cited by: 89
Logit Pairing Methods Can Fool Gradient-Based Attacks
M Mosbach*, M Andriushchenko*, T Trost, M Hein, D Klakow
NeurIPS 2018 Workshop on Security in Machine Learning
Cited by: 78
SGD with Large Step Sizes Learns Sparse Features
M Andriushchenko, A Varre, L Pillaud-Vivien, N Flammarion
ICML 2023
Cited by: 46
A Modern Look at the Relationship between Sharpness and Generalization
M Andriushchenko, F Croce, M Müller, M Hein, N Flammarion
ICML 2023
Cited by: 40
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
M Andriushchenko, F Croce, N Flammarion
ICML 2024 Workshop on the Next Generation of AI Safety
Cited by: 22
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models
P Chao*, E Debenedetti*, A Robey*, M Andriushchenko*, F Croce, ...
ICML 2024 Workshop on the Next Generation of AI Safety
Cited by: 21
Sharpness-Aware Minimization Leads to Low-Rank Features
M Andriushchenko, D Bahri, H Mobahi, N Flammarion
NeurIPS 2023
Cited by: 20
Why Do We Need Weight Decay in Modern Deep Learning?
M Andriushchenko*, F D'Angelo*, A Varre, N Flammarion
NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning
Cited by: 12
Layerwise Linear Mode Connectivity
L Adilova, M Andriushchenko, M Kamp, A Fischer, M Jaggi
ICLR 2024
Cited by: 11
#ScienceForUkraine: an Initiative to Support the Ukrainian Academic Community. “3 Months Since Russia’s Invasion in Ukraine”, February 26 – May 31, 2022
M Rose, S Reinsone, M Andriushchenko, M Bartosiak, A Bobak, L Drury, ...
Available at SSRN: https://ssrn.com/abstract=4139263, 2022
Cited by: 10*