Articles with public access mandates - Nicholas Carlini
Available somewhere: 17
Towards evaluating the robustness of neural networks
N Carlini, D Wagner
2017 IEEE Symposium on Security and Privacy (SP), 39-57, 2017
Mandates: Hewlett Foundation, US Department of Defense
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
A Athalye, N Carlini, D Wagner
International Conference on Machine Learning (ICML), 2018
Mandates: US National Science Foundation, Hewlett Foundation
Adversarial examples are not easily detected: Bypassing ten detection methods
N Carlini, D Wagner
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security …, 2017
Mandates: Hewlett Foundation, US Department of Defense
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Mandates: US National Science Foundation
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
N Carlini, C Liu, J Kos, Ú Erlingsson, D Song
28th USENIX Security Symposium (USENIX Security 19), 267-284, 2019
Mandates: US National Science Foundation, Hewlett Foundation, US Department of Defense
Audio adversarial examples: Targeted attacks on speech-to-text
N Carlini, D Wagner
2018 IEEE Security and Privacy Workshops (SPW), 1-7, 2018
Mandates: US National Science Foundation, Hewlett Foundation
On adaptive attacks to adversarial example defenses
F Tramer, N Carlini, W Brendel, A Madry
Advances in Neural Information Processing Systems 33, 1633-1645, 2020
Mandates: US National Science Foundation, Swiss National Science Foundation, US …
Hidden Voice Commands
N Carlini, P Mishra, T Vaidya, Y Zhang, M Sherr, C Shields, D Wagner, ...
USENIX Security Symposium, 513-530, 2016
Mandates: US National Science Foundation
Measuring Robustness to Natural Distribution Shifts in Image Classification
R Taori, A Dave, V Shankar, N Carlini, B Recht, L Schmidt
arXiv preprint arXiv:2007.00644, 2020
Mandates: US Department of Defense
Label-only membership inference attacks
CA Choquette-Choo, F Tramer, N Carlini, N Papernot
International Conference on Machine Learning, 1964-1974, 2021
Mandates: Natural Sciences and Engineering Research Council of Canada
Adversarial example defense: Ensembles of weak defenses are not strong
W He, J Wei, X Chen, N Carlini, D Song
11th USENIX Workshop on Offensive Technologies (WOOT 17), 2017
Mandates: US National Science Foundation
Evading Deepfake-Image Detectors with White- and Black-Box Attacks
N Carlini, H Farid
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Mandates: US Department of Defense
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
F Tramèr, J Behrmann, N Carlini, N Papernot, JH Jacobsen
arXiv preprint arXiv:2002.04599, 2020
Mandates: Swiss National Science Foundation
Stateful detection of black-box adversarial attacks
S Chen, N Carlini, D Wagner
Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial …, 2020
Mandates: Hewlett Foundation
Is Private Learning Possible with Instance Encoding?
N Carlini, S Deng, S Garg, S Jha, S Mahloujifar, M Mahmoody, S Song, ...
arXiv preprint arXiv:2011.05315, 2020
Mandates: US National Science Foundation, US Department of Defense
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
M Pintor, L Demetrio, A Sotgiu, G Manca, A Demontis, N Carlini, B Biggio, ...
arXiv preprint arXiv:2106.09947, 2021
Mandates: European Commission, Government of Italy
Increasing confidence in adversarial robustness evaluations
RS Zimmermann, W Brendel, F Tramer, N Carlini
Advances in Neural Information Processing Systems 35, 13174-13189, 2022
Mandates: German Research Foundation, Federal Ministry of Education and Research, Germany
Publication and funding information is determined automatically by a computer program.