Articles with public access mandates - Tengyu Ma
Available: 38
A simple but tough-to-beat baseline for sentence embeddings
S Arora, Y Liang, T Ma
ICLR 2017
Mandates: US National Science Foundation
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
ICML 2017; arXiv preprint arXiv:1703.00573, 2017
Mandates: US National Science Foundation, US Department of Defense
A latent variable model approach to PMI-based word embeddings
S Arora, Y Li, Y Liang, T Ma, A Risteski
Transactions of the Association for Computational Linguistics 4, 385-399, 2016
Mandates: US National Science Foundation
Finding Approximate Local Minima for Nonconvex Optimization in Linear Time
N Agarwal, Z Allen-Zhu, B Bullins, E Hazan, T Ma
STOC 2017
Mandates: US National Science Foundation
Towards explaining the regularization effect of initial large learning rate in training neural networks
Y Li, C Wei, T Ma
Advances in neural information processing systems 32, 2019
Mandates: US National Science Foundation
Provable guarantees for self-supervised deep learning with spectral contrastive loss
JZ HaoChen, C Wei, A Gaidon, T Ma
Advances in Neural Information Processing Systems 34, 5000-5011, 2021
Mandates: US National Science Foundation
Linear algebraic structure of word senses, with applications to polysemy
S Arora, Y Li, Y Liang, T Ma, A Risteski
arXiv preprint arXiv:1601.03764, 2016
Mandates: US National Science Foundation
Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
JD Lee, Q Lin, T Ma, T Yang
Journal of Machine Learning Research 18 (122), 1-43, 2017
Mandates: US National Science Foundation
Polynomial-time tensor decompositions with sum-of-squares
T Ma, J Shi, D Steurer
57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016), 2016
Mandates: US National Science Foundation
The implicit and explicit regularization effects of dropout
C Wei, S Kakade, T Ma
International conference on machine learning, 10181-10192, 2020
Mandates: US National Science Foundation
Label noise SGD provably prefers flat global minimizers
A Damian, T Ma, JD Lee
Advances in Neural Information Processing Systems 34, 27449-27461, 2021
Mandates: US National Science Foundation, US Department of Defense
Data-dependent sample complexity of deep neural networks via Lipschitz augmentation
C Wei, T Ma
Advances in Neural Information Processing Systems 32, 2019
Mandates: US National Science Foundation
Shape matters: Understanding the implicit bias of the noise covariance
JZ HaoChen, C Wei, J Lee, T Ma
Conference on Learning Theory, 2315-2357, 2021
Mandates: US National Science Foundation, US Department of Defense
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
K Shen, RM Jones, A Kumar, SM Xie, JZ HaoChen, T Ma, P Liang
International conference on machine learning, 19847-19878, 2022
Mandates: US National Science Foundation, US Department of Defense
Why do pretrained language models help in downstream tasks? An analysis of head and prompt tuning
C Wei, SM Xie, T Ma
Advances in Neural Information Processing Systems 34, 16158-16170, 2021
Mandates: US National Science Foundation, US Department of Defense
Self-training avoids using spurious features under domain shift
Y Chen, C Wei, A Kumar, T Ma
Advances in Neural Information Processing Systems 33, 21061-21071, 2020
Mandates: US National Science Foundation
Individual calibration with randomized forecasting
S Zhao, T Ma, S Ermon
International Conference on Machine Learning, 11387-11397, 2020
Mandates: US National Science Foundation, US Department of Defense
Safe reinforcement learning by imagining the near future
G Thomas, Y Luo, T Ma
Advances in Neural Information Processing Systems 34, 13859-13869, 2021
Mandates: US National Science Foundation, US Department of Defense
Statistically meaningful approximation: a case study on approximating Turing machines with transformers
C Wei, Y Chen, T Ma
Advances in Neural Information Processing Systems 35, 12071-12083, 2022
Mandates: US National Science Foundation
Calibrating predictions to decisions: A novel approach to multi-class calibration
S Zhao, M Kim, R Sahoo, T Ma, S Ermon
Advances in Neural Information Processing Systems 34, 22313-22324, 2021
Mandates: US National Science Foundation, US Department of Defense
Publication and funding information is determined automatically by a computer program.