Matthew Jagielski
Cited by
Manipulating machine learning: Poisoning attacks and countermeasures for regression learning
M Jagielski, A Oprea, B Biggio, C Liu, C Nita-Rotaru, B Li
2018 IEEE Symposium on Security and Privacy (SP), 19-35, 2018
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security Symposium, 2021
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
A Demontis, M Melis, M Pintor, M Jagielski, B Biggio, A Oprea, ...
28th USENIX Security Symposium (USENIX Security 19), 321-338, 2019
High accuracy and high fidelity extraction of neural networks
M Jagielski, N Carlini, D Berthelot, A Kurakin, N Papernot
Proceedings of the 29th USENIX Conference on Security Symposium, 1345-1362, 2020
Differentially private fair learning
M Jagielski, M Kearns, J Mao, A Oprea, A Roth, S Sharifi-Malvajerdi, ...
International Conference on Machine Learning, 3000-3008, 2019
Auditing differentially private machine learning: How private is private SGD?
M Jagielski, J Ullman, A Oprea
Advances in Neural Information Processing Systems 33, 22205-22216, 2020
Cryptanalytic extraction of neural network models
N Carlini, M Jagielski, I Mironov
Advances in Cryptology–CRYPTO 2020: 40th Annual International Cryptology …, 2020
Subpopulation data poisoning attacks
M Jagielski, G Severi, N Pousette Harger, A Oprea
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications …, 2021
Quantifying Memorization Across Neural Language Models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramer, C Zhang
arXiv preprint arXiv:2202.07646, 2022
Threat Detection for Collaborative Adaptive Cruise Control in Connected Cars
M Jagielski, N Jones, CW Lin, C Nita-Rotaru, S Shiraishi
Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and …, 2018
Secure communication channel establishment: TLS 1.3 (over TCP fast open) vs. QUIC
S Chen, S Jero, M Jagielski, A Boldyreva, C Nita-Rotaru
Computer Security–ESORICS 2019: 24th European Symposium on Research in …, 2019
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
F Tramèr, R Shokri, AS Joaquin, H Le, M Jagielski, S Hong, N Carlini
arXiv preprint arXiv:2204.00032, 2022
Counterfactual Memorization in Neural Language Models
C Zhang, D Ippolito, K Lee, M Jagielski, F Tramèr, N Carlini
arXiv preprint arXiv:2112.12938, 2021
Network and system level security in connected vehicle applications
H Liang, M Jagielski, B Zheng, CW Lin, E Kang, S Shiraishi, C Nita-Rotaru, ...
2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 1-7, 2018
Debugging Differential Privacy: A Case Study for Privacy Auditing
F Tramer, A Terzis, T Steinke, S Song, M Jagielski, N Carlini
arXiv preprint arXiv:2202.12219, 2022
Measuring Forgetting of Memorized Training Examples
M Jagielski, O Thakkar, F Tramèr, D Ippolito, K Lee, N Carlini, E Wallace, ...
arXiv preprint arXiv:2207.00099, 2022
The Privacy Onion Effect: Memorization is Relative
N Carlini, M Jagielski, N Papernot, A Terzis, F Tramer, C Zhang
arXiv preprint arXiv:2206.10469, 2022
How to Combine Membership-Inference Attacks on Multiple Updated Models
M Jagielski, S Wu, A Oprea, J Ullman, R Geambasu
arXiv preprint arXiv:2205.06369, 2022
SNAP: Efficient Extraction of Private Properties with Poisoning
H Chaudhari, J Abascal, A Oprea, M Jagielski, F Tramèr, J Ullman
arXiv preprint arXiv:2208.12348, 2022
Adversarial Attacks and Countermeasures on Private Training in MPC
D Escudero, M Jagielski, R Rachuri, P Scholl
PPML@NeurIPS, 2021