Two communication-efficient distributed algorithms are proposed to solve a system of linear equations Ax = b with Laplacian sparse A. Such systems arise in many applications, e.g., power flow problems and other network flow problems.
The first algorithm is based on the gradient descent method in optimization, and each agent shares only two parts of the system state rather than the whole state, which saves significant communication. The two parts an agent shares over a communication link are its own state information and that of the neighbor connected by that link.
The second method is obtained from an approximation of the Newton method and converges faster. It requires twice as much communication as the first one but is still communication-efficient due to the low dimension of each part shared between agents. Both methods are proved to converge at a linear rate. A comprehensive comparison of the convergence rate, communication burden, and computation cost of the two methods is made. Finally, a simulation is conducted to show the effectiveness of both methods.
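The paper develops the distributed, agent-level update rules; as a rough centralized illustration of the underlying iteration only, the Python sketch below runs plain gradient descent on f(x) = 0.5 x^T A x - b^T x for a small weighted graph Laplacian A. The graph, edge weights, step size, and iteration count are illustrative assumptions rather than values from the paper, and the agents' low-dimensional message exchange is not modeled.

import numpy as np

# Build a small weighted graph Laplacian A (symmetric, positive semidefinite,
# rows summing to zero) as a toy stand-in for the Laplacian-sparse systems the
# paper targets, e.g., network flow problems.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (3, 0, 0.5), (1, 3, 1.0)]
n = 4
A = np.zeros((n, n))
for i, j, w in edges:
    A[i, i] += w
    A[j, j] += w
    A[i, j] -= w
    A[j, i] -= w

# Choose b in the range of A so that Ax = b is consistent (for a connected
# graph Laplacian this means the entries of b sum to zero).
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true

# Gradient descent on f(x) = 0.5 x^T A x - b^T x, whose gradient is Ax - b.
# A step size of 1/lambda_max gives linear-rate convergence on the range of A.
alpha = 1.0 / np.linalg.eigvalsh(A)[-1]
x = np.zeros(n)
for _ in range(500):
    x = x - alpha * (A @ x - b)

print("gradient-descent residual ||Ax - b|| =", np.linalg.norm(A @ x - b))

The second method's approximate Newton update is likewise not reproduced here; purely as a generic stand-in, the damped Jacobi iteration below (continuing from the setup above) preconditions the same residual with the diagonal of A, a crude approximation of the Hessian. The damping factor is another illustrative assumption.

# Damped Jacobi iteration: a diagonal approximation of the Newton step, not
# the specific approximation constructed in the paper.
omega = 2.0 / 3.0            # damping factor, assumed for illustration
d = np.diag(A).copy()        # positive diagonal (weighted degree) entries
x = np.zeros(n)
for _ in range(200):
    x = x - omega * (A @ x - b) / d

print("preconditioned residual ||Ax - b|| =", np.linalg.norm(A @ x - b))

In both sketches the residual decays linearly as long as b is consistent, which parallels the linear-rate convergence stated above, while the preconditioned step illustrates why a Newton-type method can converge in fewer iterations at the price of extra information exchanged per iteration.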
Revised: May 21, 2019 | Published: December 17, 2018
Citation
Wang P., Y. Gao, N. Yu, W. Ren, J. Lian, and D. Wu. 2018. Communication-efficient Distributed Solutions to a System of Linear Equations with Laplacian Sparse Structure. In 2018 IEEE Conference on Decision and Control (CDC), 3367-3372. Piscataway, New Jersey: IEEE. PNNL-SA-133154. doi:10.1109/CDC.2018.8619387