Authors
Fatemeh Barani, Abdorreza Savadi, Hadi Sadoghi Yazdi
Publication date
2021/6/1
Journal
Signal Processing
Volume
183
Pages
108014
Publisher
Elsevier
Description
Stochastic gradient descent (SGD) is a well-known method in machine learning that takes advantage of low computational complexity in large-scale problems. Distributed learning provides a good framework for managing such problems: it avoids aggregating data at a central workstation and saves time and energy. Diffusion strategies can be applied to solve distributed learning problems. In this paper, we present a new distributed algorithm of the SGD type based on diffusion strategies, called the diffusion SGD algorithm, and investigate its convergence behavior for solving linear prediction problems. We prove the convergence of the proposed algorithm by mathematically formulating its progression and obtain an upper bound on the errors made by the update rule. Experiments on system identification problems confirm our theoretical findings. The simulation results comparing with state …
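To make the diffusion mechanism concrete, below is a minimal sketch of a diffusion SGD update of the adapt-then-combine type for a distributed linear system identification task of the kind described in the abstract. The ring topology, combination weights, step size, and noise level are illustrative assumptions, not the exact configuration analyzed in the paper.

```python
import numpy as np

# Hypothetical sketch of diffusion SGD (adapt-then-combine form).
# Each node takes a local SGD step on its own streaming data, then
# averages its intermediate estimate with its neighbors' estimates.
# Topology, weights, and parameters below are assumptions.

rng = np.random.default_rng(0)

N, M = 5, 4                       # number of nodes, filter length
w_true = rng.standard_normal(M)   # unknown system to identify

# Doubly stochastic combination matrix over an assumed ring topology.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k + 1) % N] = 0.25
    A[k, (k - 1) % N] = 0.25

w = np.zeros((N, M))              # local estimates, one row per node
mu = 0.05                         # step size (assumed)

for t in range(2000):
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                     # regressor at node k
        d = u @ w_true + 0.01 * rng.standard_normal()  # noisy measurement
        e = d - u @ w[k]                               # prediction error
        psi[k] = w[k] + mu * e * u                     # adapt: local SGD step
    w = A @ psi                                        # combine: neighbor averaging

print(np.linalg.norm(w - w_true, axis=1))              # per-node estimation error
```

The combine step keeps the nodes' estimates close to one another, so the network behaves like a single learner seeing all the data while each node touches only its own stream.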
Total citations
2020: 1, 2021: 2, 2022: 6, 2023: 5, 2024: 2