A General Framework to Distribute Iterative Algorithms With Localized Information Over Networks
RISE Research Institutes of Sweden, Sweden.
Massachusetts Institute of Technology, USA.
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0002-6617-8683
KTH Royal Institute of Technology, Sweden.
Number of Authors: 4
2023 (English)
In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 68, no 12, p. 7358-7373
Article in journal (Refereed). Published.
Abstract [en]

Emerging applications in the Internet of Things (IoT) and edge computing/learning have sparked massive renewed interest in developing distributed versions of existing (centralized) iterative algorithms, often used for optimization or machine learning purposes. While existing work in the literature exhibits similarities in both algorithm design and theoretical analysis, there is still no unified method or framework for accomplishing these tasks. This article develops such a general framework for distributing the execution of (centralized) iterative algorithms over networks in which the required information or data is partitioned among the nodes of the network. It further shows that the distributed iterative algorithm resulting from the proposed framework retains the convergence properties (rate) of the original (centralized) algorithm. In addition, the article applies the proposed framework to several example applications, obtaining results comparable to the state of the art for each while greatly simplifying and generalizing their convergence analysis. These example applications yield new results for distributed proximal versions of gradient descent, the heavy ball method, and Newton's method. For example, these results show that the dependence of the convergence rate of this distributed heavy ball method on the condition number is at least as good as that of centralized gradient descent.
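To make the setting concrete, the sketch below illustrates the generic pattern that such frameworks distribute: a decentralized gradient descent in which each node holds private data, mixes iterates with its network neighbors, and takes a local gradient step. This is a minimal illustrative sketch, not the article's actual framework; the quadratic local objectives, the ring network, and the uniform mixing weights are all assumptions made for the example.

    import numpy as np

    # Illustrative setup (not from the article): each of n nodes holds a
    # private quadratic f_i(x) = 0.5*||A_i x - b_i||^2, and the network
    # cooperatively minimizes sum_i f_i(x) without sharing raw data.
    rng = np.random.default_rng(0)
    n_nodes, dim = 5, 3
    A = [rng.standard_normal((4, dim)) for _ in range(n_nodes)]
    b = [rng.standard_normal(4) for _ in range(n_nodes)]

    # Doubly stochastic mixing matrix for a ring network: each node
    # averages uniformly over itself and its two neighbors.
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        for j in (i - 1, i, i + 1):
            W[i, j % n_nodes] = 1.0 / 3.0

    def local_grad(i, x):
        """Gradient of node i's private objective at its local iterate x."""
        return A[i].T @ (A[i] @ x - b[i])

    x = np.zeros((n_nodes, dim))  # row i is node i's local iterate
    step = 0.02
    for _ in range(500):
        # Consensus step (mix with neighbors) plus a local gradient step.
        grads = np.array([local_grad(i, x[i]) for i in range(n_nodes)])
        x = W @ x - step * grads

    # The local iterates approximately agree on a common minimizer.
    print("max disagreement across nodes:", np.abs(x - x.mean(axis=0)).max())

With a constant step size, this simple scheme only reaches a neighborhood of the optimum; the framework in the article concerns distributed variants (including heavy ball and Newton-type updates) whose convergence rates match their centralized counterparts.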

Place, publisher, year, edition, pages
2023. Vol. 68, no 12, p. 7358-7373
Keywords [en]
Distributed algorithms, optimization algorithms, communication networks, agents and autonomous systems
National Category
Computer Sciences
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-222620
DOI: 10.1109/TAC.2023.3279901
ISI: 001122871700040
Scopus ID: 2-s2.0-85161038450
OAI: oai:DiVA.org:su-222620
DiVA id: diva2:1804772
Available from: 2023-10-13. Created: 2023-10-13. Last updated: 2024-03-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Magnússon, Sindri
