Articles with public access mandates - Tengyu Ma - US Department of Defense
Note: For this mandate, articles should be available from specific locations.
Available based on mandate: 13
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
ICML 2017; arXiv preprint arXiv:1703.00573, 2017
Label noise SGD provably prefers flat global minimizers
A Damian, T Ma, JD Lee
Advances in Neural Information Processing Systems 34, 27449-27461, 2021
Shape matters: Understanding the implicit bias of the noise covariance
JZ HaoChen, C Wei, J Lee, T Ma
Conference on Learning Theory, 2315-2357, 2021
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
K Shen, RM Jones, A Kumar, SM Xie, JZ HaoChen, T Ma, P Liang
International Conference on Machine Learning, 19847-19878, 2022
Why do pretrained language models help in downstream tasks? An analysis of head and prompt tuning
C Wei, SM Xie, T Ma
Advances in Neural Information Processing Systems 34, 16158-16170, 2021
Individual calibration with randomized forecasting
S Zhao, T Ma, S Ermon
International Conference on Machine Learning, 11387-11397, 2020
Safe reinforcement learning by imagining the near future
G Thomas, Y Luo, T Ma
Advances in Neural Information Processing Systems 34, 13859-13869, 2021
Calibrating predictions to decisions: A novel approach to multi-class calibration
S Zhao, M Kim, R Sahoo, T Ma, S Ermon
Advances in Neural Information Processing Systems 34, 22313-22324, 2021
Learning barrier certificates: Towards safe reinforcement learning with zero training-time violations
Y Luo, T Ma
Advances in Neural Information Processing Systems 34, 25621-25632, 2021
On the expressivity of neural networks for deep reinforcement learning
K Dong, Y Luo, T Yu, C Finn, T Ma
International Conference on Machine Learning, 2627-2637, 2020
Same pre-training loss, better downstream: Implicit bias matters for language models
H Liu, SM Xie, Z Li, T Ma
International Conference on Machine Learning, 22188-22214, 2023
Composed fine-tuning: Freezing pre-trained denoising autoencoders for improved generalization
SM Xie, T Ma, P Liang
International Conference on Machine Learning, 11424-11435, 2021
Beyond lazy training for over-parameterized tensor decomposition
X Wang, C Wu, JD Lee, T Ma, R Ge
Advances in Neural Information Processing Systems 33, 21934-21944, 2020
Publication and funding information is determined automatically by a computer program