Authors
Alexander Ulanov, Andrey Simanovsky, Manish Marwah
Publication date
2017/4/19
Conference
2017 IEEE 33rd International Conference on Data Engineering (ICDE)
Pages
1249-1254
Publisher
IEEE
Description
Present-day machine learning is computationally intensive and processes large amounts of data. To address these scalability issues, it is implemented in a distributed fashion, with the work parallelized across a number of computing nodes. It is usually hard to estimate in advance how many nodes to use for a particular workload. We propose a simple framework for estimating the scalability of distributed machine learning algorithms. We measure scalability by means of the speedup an algorithm achieves with more nodes. We propose time complexity models for gradient descent and graphical model inference. We validate the gradient descent model with experiments on deep learning training, and the graphical model inference model with experiments on loopy belief propagation. The proposed framework was used to study the scalability of machine learning algorithms in Apache Spark.
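A minimal sketch of the speedup measure the abstract describes, under an assumed compute-plus-communication cost model; the constants and the cost model itself are hypothetical illustrations, not the paper's actual complexity models:

    # Hypothetical cost model (not the paper's): one distributed
    # gradient-descent iteration splits computation across nodes while
    # communication (gradient aggregation) grows with node count.

    def iteration_time(nodes: int,
                       compute_per_example: float = 1e-4,  # sec/example (assumed)
                       examples_per_batch: int = 10_000,   # batch size (assumed)
                       comm_per_node: float = 5e-3) -> float:
        """Estimated time of one iteration on `nodes` workers."""
        compute = compute_per_example * examples_per_batch / nodes  # parallel part
        communication = comm_per_node * nodes                       # aggregation cost
        return compute + communication

    def speedup(nodes: int) -> float:
        """Speedup over a single node: the paper's scalability measure."""
        return iteration_time(1) / iteration_time(nodes)

    if __name__ == "__main__":
        for n in (1, 2, 4, 8, 16, 32):
            print(f"{n:>3} nodes: speedup {speedup(n):.2f}x")

Under such a model, speedup rises while the parallelizable compute dominates and flattens (or declines) once per-node communication overhead takes over, which is why estimating a useful node count in advance is nontrivial.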
Total citations
2017: 2 · 2018: 3 · 2019: 3 · 2020: 2 · 2021: 1 · 2022: 1 · 2023: 4 · 2024: 1
Scholar articles
A Ulanov, A Simanovsky, M Marwah - 2017 IEEE 33rd International Conference on Data …, 2017