Manli Shu
Title · Cited by · Year
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models
M Shu, W Nie, DA Huang, Z Yu, T Goldstein, A Anandkumar, C Xiao
Conference on Neural Information Processing Systems (NeurIPS), 2022
Cited by 211 · 2022
On the reliability of watermarks for large language models
J Kirchenbauer, J Geiping, Y Wen, M Shu, K Saifullah, K Kong, ...
arXiv preprint arXiv:2306.04634, 2023
Cited by 107 · 2023
What do vision transformers learn? a visual exploration
A Ghiasi, H Kazemi, E Borgnia, S Reich, M Shu, M Goldblum, AG Wilson, ...
arXiv preprint arXiv:2212.06727, 2022
Cited by 56 · 2022
On the exploitability of instruction tuning
M Shu, J Wang, C Zhu, J Geiping, C Xiao, T Goldstein
Advances in Neural Information Processing Systems 36, 61836-61856, 2023
Cited by 49 · 2023
Gradient-Free Adversarial Training against Image Corruption for Learning-based Steering
Y Shen, L Zheng, M Shu, W Li, T Goldstein, M Lin
Conference on Neural Information Processing Systems (NeurIPS), 2021
Cited by 40* · 2021
Encoding Robustness to Image Style via Adversarial Feature Perturbation
M Shu, Z Wu, M Goldblum, T Goldstein
Conference on Neural Information Processing Systems (NeurIPS), 2021
Cited by 33* · 2021
Battle of the backbones: A large-scale comparison of pretrained models across computer vision tasks
M Goldblum, H Souri, R Ni, M Shu, V Prabhu, G Somepalli, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 32 · 2024
The Close Relationship Between Contrastive Learning and Meta-Learning
R Ni, M Shu, H Souri, M Goldblum, T Goldstein
International Conference on Learning Representations (ICLR), 2021
Cited by 21 · 2021
Bring your own data! self-supervised evaluation for large language models
N Jain, K Saifullah, Y Wen, J Kirchenbauer, M Shu, A Saha, M Goldblum, ...
arXiv preprint arXiv:2306.13651, 2023
Cited by 19 · 2023
Coercing LLMs to do and reveal (almost) anything
J Geiping, A Stein, M Shu, K Saifullah, Y Wen, T Goldstein
arXiv preprint arXiv:2402.14020, 2024
Cited by 16 · 2024
Adversarial Differentiable Data Augmentation for Autonomous Systems
M Shu, Y Shen, MC Lin, T Goldstein
International Conference on Robotics and Automation (ICRA), 2021
Cited by 15 · 2021
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
R Levin, M Shu, E Borgnia, F Huang, M Goldblum, T Goldstein
Conference on Neural Information Processing Systems (NeurIPS), 2022
Cited by 9 · 2021
Headless horseman: Adversarial attacks on transfer learning models
A Abdelkader, MJ Curry, L Fowl, T Goldstein, A Schwarzschild, M Shu, ...
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 5 · 2020
Shadowcast: Stealthy data poisoning attacks against vision-language models
Y Xu, J Yao, M Shu, Y Sun, Z Wu, N Yu, T Goldstein, F Huang
arXiv preprint arXiv:2402.06659, 2024
Cited by 4 · 2024
Towards accurate quantization and pruning via data-free knowledge transfer
C Zhu, Z Xu, A Shafahi, M Shu, A Ghiasi, T Goldstein
arXiv preprint arXiv:2010.07334, 2020
Cited by 3 · 2020
MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
A Awadalla, L Xue, O Lo, M Shu, H Lee, EK Guha, M Jordan, S Shen, ...
arXiv preprint arXiv:2406.11271, 2024
Cited by 1 · 2024
Hierarchical Point Attention for Indoor 3D Object Detection
M Shu, L Xue, N Yu, R Martín-Martín, C Xiong, T Goldstein, JC Niebles, ...
2024 IEEE International Conference on Robotics and Automation (ICRA), 4245-4251, 2024
Cited by 1* · 2024
xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations
C Qin, C Xia, K Ramakrishnan, M Ryoo, L Tu, Y Feng, M Shu, H Zhou, ...
arXiv preprint arXiv:2408.12590, 2024
2024
xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
L Xue, M Shu, A Awadalla, J Wang, A Yan, S Purushwalkam, H Zhou, ...
arXiv preprint arXiv:2408.08872, 2024
2024
Systems and methods for attention mechanism in three-dimensional object detection
M Shu, L Xue, N Yu, R Martín-Martín, JCN Duque, C Xiong, R Xu
US Patent App. 18/161,661, 2024
2024