Compressed Gradient Methods With Hessian-Aided Error Compensation
KTH Royal Institute of Technology.
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences. ORCID iD: 0000-0002-6617-8683
Number of Authors: 3
2021 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 69, p. 998-1011
Article in journal (Refereed) Published
Abstract [en]

The emergence of big data has caused a dramatic shift in the operating regime for optimization algorithms. The performance bottleneck, which used to be computations, is now often communications. Several gradient compression techniques have been proposed to reduce the communication load at the price of a loss in solution accuracy. Recently, it has been shown how compression errors can be compensated for in the optimization algorithm to improve the solution accuracy. Even though convergence guarantees for error-compensated algorithms have been established, there is very limited theoretical support for quantifying the observed improvements in solution accuracy. In this paper, we show that Hessian-aided error compensation, unlike other existing schemes, avoids accumulation of compression errors on quadratic problems. We also present strong convergence guarantees of Hessian-based error compensation for stochastic gradient descent. Our numerical experiments highlight the benefits of Hessian-based error compensation, and demonstrate that similar convergence improvements are attained when only a diagonal Hessian approximation is used.
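The mechanism the abstract describes, compressing the error-compensated gradient and carrying the leftover compression error forward through a Hessian-scaled memory term, can be illustrated with a short sketch. This is a minimal illustration under our own assumptions, not the authors' code: the top-k compressor, the step size, and the exact error recursion e <- (I - step*H)(g + e - C(g + e)) are one plausible reading of the scheme, and the paper's algorithm may differ in detail.

import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries, zero the rest
    # (a standard sparsifying compressor; the choice is illustrative).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def hessian_ec_gd(A, b, x0, step, k, iters):
    # Gradient descent on the quadratic f(x) = 0.5*x^T A x - b^T x with
    # top-k compression and Hessian-scaled error compensation. The error
    # recursion below is our reading of the abstract, not verbatim from
    # the paper.
    x, e = x0.copy(), np.zeros_like(x0)
    I = np.eye(len(x0))
    for _ in range(iters):
        g = A @ x - b                 # exact gradient of the quadratic
        p = g + e                     # add back the accumulated error
        c = top_k(p, k)               # compressed message actually sent
        x = x - step * c              # step along the compressed direction
        e = (I - step * A) @ (p - c)  # Hessian-aided error memory
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q @ Q.T + np.eye(20)              # positive definite Hessian
b = rng.standard_normal(20)
x = hessian_ec_gd(A, b, np.zeros(20),
                  step=1.0 / np.linalg.norm(A, 2), k=5, iters=2000)
print(np.linalg.norm(A @ x - b))      # residual; should be close to zero

On this quadratic test problem the error memory is contracted by (I - step*A) at every iteration, which is the effect the abstract credits with avoiding error accumulation; swapping A for np.diag(np.diag(A)) in the error update gives the diagonal-Hessian variant mentioned in the abstract's last sentence.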

Place, publisher, year, edition, pages
2021. Vol. 69, p. 998-1011
Keywords [en]
Signal processing algorithms, Optimization, Error compensation, Compressors, Approximation algorithms, Machine learning algorithms, Convergence, Quantization, gradient methods, stochastic gradient descent
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Mathematics
Identifiers
URN: urn:nbn:se:su:diva-192816
DOI: 10.1109/TSP.2020.3048229
ISI: 000617753900001
OAI: oai:DiVA.org:su-192816
DiVA, id: diva2:1548503
Available from: 2021-05-01. Created: 2021-05-01. Last updated: 2022-02-25. Bibliographically approved.

Open Access in DiVA
No full text in DiVA

Other links
Publisher's full text

Authority records
Magnússon, Sindri
