Niloofar Mireshghallah
Other names: Fatemeh Mireshghallah
Postdoctoral scholar, University of Washington
Verified email at cs.washington.edu
Title · Cited by · Year
Privacy in deep learning: A survey
F Mireshghallah, M Taram, P Vepakomma, A Singh, R Raskar, ...
arXiv preprint arXiv:2004.12254, 2020
Cited by 189* · 2020
What does it mean for a language model to preserve privacy?
H Brown, K Lee, F Mireshghallah, R Shokri, F Tramèr
Proceedings of the 2022 ACM conference on fairness, accountability, and …, 2022
Cited by 181 · 2022
Quantifying privacy risks of masked language models using membership inference attacks
F Mireshghallah, K Goyal, A Uniyal, T Berg-Kirkpatrick, R Shokri
arXiv preprint arXiv:2203.03929, 2022
Cited by 111 · 2022
Shredder: Learning noise distributions to protect inference privacy
F Mireshghallah, M Taram, P Ramrakhyani, A Jalali, D Tullsen, ...
Proceedings of the Twenty-Fifth International Conference on Architectural …, 2020
Cited by 109* · 2020
Releq: A reinforcement learning approach for deep quantization of neural networks
AT Elthakeb, P Pilligundla, FS Mireshghallah, A Yazdanbakhsh, ...
arXiv preprint arXiv:1811.01704, 2018
Cited by 101* · 2018
Neither private nor fair: Impact of data imbalance on utility and fairness in differential privacy
T Farrand, F Mireshghallah, S Singh, A Trask
Proceedings of the 2020 workshop on privacy-preserving machine learning in …, 2020
Cited by 100 · 2020
Membership inference attacks against language models via neighbourhood comparison
J Mattern, F Mireshghallah, Z Jin, B Schölkopf, M Sachan, ...
arXiv preprint arXiv:2305.18462, 2023
Cited by 81 · 2023
An empirical analysis of memorization in fine-tuned autoregressive language models
F Mireshghallah, A Uniyal, T Wang, DK Evans, T Berg-Kirkpatrick
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 78* · 2022
Mix and match: Learning-free controllable text generation using energy language models
F Mireshghallah, K Goyal, T Berg-Kirkpatrick
arXiv preprint arXiv:2203.13299, 2022
Cited by 65 · 2022
Benchmarking differential privacy and federated learning for bert models
P Basu, TS Roy, R Naidu, Z Muftuoglu, S Singh, F Mireshghallah
arXiv preprint arXiv:2106.13973, 2021
Cited by 64 · 2021
Not all features are equal: Discovering essential features for preserving prediction privacy
F Mireshghallah, M Taram, A Jalali, ATT Elthakeb, D Tullsen, ...
Proceedings of the Web Conference 2021, 669-680, 2021
Cited by 57* · 2021
Flute: A scalable, extensible framework for high-performance federated learning simulations
D Dimitriadis, Mirian Hipolito Garcia, Andre Manoel, Daniel Madrigal Diaz, Fatemehsadat ...
arXiv preprint arXiv:2203.13789, 2022
Cited by 49* · 2022
Smaller Language Models are Better Zero-shot Machine-Generated Text Detectors
N Mireshghallah, J Mattern, S Gao, R Shokri, T Berg-Kirkpatrick
Proceedings of the 18th Conference of the European Chapter of the …, 2024
Cited by 43* · 2024
Do membership inference attacks work on large language models?
M Duan, A Suri, N Mireshghallah, S Min, W Shi, L Zettlemoyer, Y Tsvetkov, ...
arXiv preprint arXiv:2402.07841, 2024
Cited by 38* · 2024
Dp-sgd vs pate: Which has less disparate impact on model accuracy?
A Uniyal, R Naidu, S Kotti, S Singh, PJ Kenfack, F Mireshghallah, A Trask
arXiv preprint arXiv:2106.12576, 2021
Cited by 36 · 2021
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
F Mireshghallah, HA Inan, M Hasegawa, V Rühle, T Berg-Kirkpatrick, ...
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Cited by 33 · 2021
Can llms keep a secret? testing privacy implications of language models via contextual integrity theory
N Mireshghallah, H Kim, X Zhou, Y Tsvetkov, M Sap, R Shokri, Y Choi
arXiv preprint arXiv:2310.17884, 2023
Cited by 32 · 2023
UserIdentifier: implicit user representations for simple and effective personalized sentiment analysis
F Mireshghallah, V Shrivastava, M Shokouhi, T Berg-Kirkpatrick, R Sim, ...
arXiv preprint arXiv:2110.00135, 2021
Cited by 31 · 2021
A roadmap to pluralistic alignment
T Sorensen, J Moore, J Fisher, M Gordon, N Mireshghallah, CM Rytting, ...
arXiv preprint arXiv:2402.05070, 2024
Cited by 30 · 2024
Releq: An automatic reinforcement learning approach for deep quantization of neural networks
A Yazdanbakhsh, AT Elthakeb, P Pilligundla, F Mireshghallah, ...
arXiv preprint arXiv:1811.01704 1 (2), 2018
Cited by 28 · 2018
Articles 1–20