Authors
Behnaz Arzani, Siva Kesava Reddy Kakarla, Miguel Castro, Srikanth Kandula, Saeed Maleki, Luke Marshall
Publication date
2023/5/22
Journal
arXiv preprint arXiv:2305.13479
Description
We show that recent work on communication schedulers proposed for ML collectives does not scale to the increasing problem sizes that arise from training larger models, and that these schedulers often produce suboptimal schedules. We draw a connection to similar problems in traffic engineering and propose a new method, TE-CCL, that finds better-quality schedules (e.g., schedules that finish collectives faster and/or send fewer bytes) and does so more quickly on larger topologies. We present results on many different GPU topologies that show substantial improvements over the state of the art.
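To give a flavor of the traffic-engineering connection the abstract describes, the sketch below formulates a toy transfer-scheduling problem as a flow LP on a time-expanded topology. This is only an illustration under assumed inputs (a made-up 3-GPU chain, unit link capacities, a point-to-point transfer rather than a real collective, and SciPy's linprog as the solver); it is not the paper's TE-CCL formulation, which handles collectives such as all-gather, chunk copying, and completion-time objectives.

```python
"""
Illustrative sketch only: schedule a toy GPU-to-GPU transfer as a max-flow
LP on a time-expanded graph, in the spirit of traffic-engineering
formulations. Topology, capacities, and horizon are assumptions, not the
paper's model.
"""
from itertools import product
import numpy as np
from scipy.optimize import linprog

GPUS = [0, 1, 2]                          # toy chain topology: 0 <-> 1 <-> 2
LINKS = [(0, 1), (1, 0), (1, 2), (2, 1)]  # directed links, 1 chunk per step
T = 2                                     # scheduling horizon (time steps)
SRC, DST, SUPPLY = (0, 0), (2, T), 2      # move chunks from GPU 0 to GPU 2

# Time-expanded graph: an edge either moves a chunk across a physical link
# in one step, or keeps it buffered at the same GPU until the next step.
edges, caps = [], []
for t in range(T):
    for u, v in LINKS:
        edges.append(((u, t), (v, t + 1)))
        caps.append(1.0)                  # a link carries one chunk per step
    for g in GPUS:
        edges.append(((g, t), (g, t + 1)))
        caps.append(float(SUPPLY))        # buffering at a GPU is not a bottleneck

nodes = list(product(GPUS, range(T + 1)))
n_edges = len(edges)

# Objective: maximize flow arriving at DST (linprog minimizes, so negate).
c = np.array([-1.0 if head == DST else 0.0 for _, head in edges])

# Flow conservation at every intermediate time-expanded node.
rows = []
for node in nodes:
    if node in (SRC, DST):
        continue
    row = np.zeros(n_edges)
    for i, (tail, head) in enumerate(edges):
        if head == node:
            row[i] += 1.0
        if tail == node:
            row[i] -= 1.0
    rows.append(row)
A_eq = np.array(rows)
b_eq = np.zeros(len(rows))

# The source can inject at most SUPPLY chunks over the horizon.
a_src = np.array([1.0 if tail == SRC else 0.0 for tail, _ in edges])
res = linprog(c, A_ub=a_src[None, :], b_ub=[SUPPLY],
              A_eq=A_eq, b_eq=b_eq, bounds=list(zip([0.0] * n_edges, caps)))

print("chunks delivered by step", T, "=", -res.fun)
for f, (tail, head) in zip(res.x, edges):
    if f > 1e-6 and tail[0] != head[0]:   # report only actual link transfers
        print(f"step {head[1]}: GPU {tail[0]} -> GPU {head[0]}: {f:.0f} chunk(s)")
```

On this toy instance the LP recovers the obvious schedule (GPU 0 to GPU 1 at step 1, GPU 1 to GPU 2 at step 2, one chunk delivered within the horizon); the point is only that link capacities, buffering, and per-step routing decisions fit naturally into a traffic-engineering-style optimization.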