Hamiltonian Monte Carlo with Energy Conserving Subsampling
Number of Authors: 5
2019 (English)
In: Journal of Machine Learning Research, ISSN 1532-4435, E-ISSN 1533-7928, Vol. 20, p. 1-31, article id 100
Article in journal (Refereed) Published
Abstract [en]

Hamiltonian Monte Carlo (HMC) samples efficiently from high-dimensional posterior distributions, with proposed parameter draws obtained by iterating on a discretized version of the Hamiltonian dynamics. The iterations make HMC computationally costly, especially in problems with large data sets, since it is necessary to compute posterior densities and their derivatives with respect to the parameters. Naively computing the Hamiltonian dynamics on a subset of the data causes HMC to lose its key ability to generate distant parameter proposals with high acceptance probability. The key insight in our article is that efficient subsampling HMC for the parameters is possible if both the dynamics and the acceptance probability are computed from the same data subsample in each complete HMC iteration. We show that this can be done in a principled way in an HMC-within-Gibbs framework where the subsample is updated using a pseudo-marginal MH step and the parameters are then updated using an HMC step based on the current subsample. We show that our subsampling methods are fast and compare favorably to two popular sampling algorithms that use gradient estimates from data subsampling. We also explore the current limitations of subsampling HMC algorithms by varying the quality of the variance-reducing control variates used in the estimators of the posterior density and its gradients.
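
The abstract outlines the sampler's structure: an HMC-within-Gibbs scheme in which the subsample indices are refreshed with a pseudo-marginal Metropolis-Hastings step, and the parameters are then updated with an HMC step whose leapfrog dynamics and acceptance probability both use that same subsample. The Python sketch below illustrates this structure on a toy Gaussian model. It is a minimal illustration under stated assumptions, not the authors' implementation: a plain scaled subsample-sum log-likelihood estimator stands in for the article's control-variate estimators, and all names (log_lhat, ec_subsampling_hmc, the tuning values) are invented for the example.

    import numpy as np

    # Toy model: y_i ~ N(theta, 1) with a N(0, 1) prior on theta.

    def log_lhat(theta, idx, data):
        # Estimate of the full-data log-likelihood from the subsample `idx`.
        # The article uses variance-reducing control variates here; this sketch
        # uses the plain scaled subsample sum, which is much noisier.
        n, m = len(data), len(idx)
        return (n / m) * np.sum(-0.5 * (data[idx] - theta) ** 2)

    def grad_log_lhat(theta, idx, data):
        # Matching subsample estimate of the log-likelihood gradient.
        n, m = len(data), len(idx)
        return (n / m) * np.sum(data[idx] - theta)

    def log_prior(theta):
        return -0.5 * theta ** 2

    def ec_subsampling_hmc(data, m, n_iter, step_size=0.005, n_leapfrog=10, seed=0):
        rng = np.random.default_rng(seed)
        n = len(data)
        theta = float(data.mean())   # start near the mode to keep the demo short
        idx = rng.choice(n, size=m)  # current subsample u
        draws = np.empty(n_iter)

        def grad_U(th):
            # Gradient of the estimated negative log-posterior, from subsample idx.
            return -grad_log_lhat(th, idx, data) + th

        def H(th, mom):
            # Estimated Hamiltonian: estimated potential energy plus kinetic term.
            return -(log_lhat(th, idx, data) + log_prior(th)) + 0.5 * mom ** 2

        for t in range(n_iter):
            # Gibbs block 1: pseudo-marginal MH refresh of the subsample u given
            # theta. With the noisy plain estimator this step can reject often;
            # the article's control variates are what keep it mobile.
            idx_prop = rng.choice(n, size=m)
            if np.log(rng.uniform()) < (log_lhat(theta, idx_prop, data)
                                        - log_lhat(theta, idx, data)):
                idx = idx_prop

            # Gibbs block 2: HMC update of theta given u. The leapfrog dynamics
            # and the accept/reject energy both use the same subsample idx.
            p = rng.standard_normal()
            th_new, p_new = theta, p
            p_new -= 0.5 * step_size * grad_U(th_new)   # half momentum step
            for _ in range(n_leapfrog - 1):
                th_new += step_size * p_new
                p_new -= step_size * grad_U(th_new)
            th_new += step_size * p_new                 # final position step
            p_new -= 0.5 * step_size * grad_U(th_new)   # final half momentum step
            if np.log(rng.uniform()) < H(theta, p) - H(th_new, p_new):
                theta = th_new
            draws[t] = theta
        return draws

    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, size=10_000)
    draws = ec_subsampling_hmc(data, m=200, n_iter=2_000)
    print("posterior mean estimate:", draws[500:].mean())

Note that exponentiating an unbiased estimate of the log-likelihood gives a biased estimate of the likelihood itself, so a chain like this targets a slightly perturbed posterior; the quality of the control variates governs both that perturbation and how freely the subsample-refresh step moves, which is the trade-off the last sentence of the abstract examines.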

Place, publisher, year, edition, pages
2019. Vol. 20, p. 1-31, article id 100
Keywords [en]
Bayesian inference, Big Data, Markov chain Monte Carlo, Estimated likelihood, Stochastic gradient Hamiltonian Monte Carlo, Stochastic Gradient Langevin Dynamics
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer and Information Sciences
Identifiers
URN: urn:nbn:se:su:diva-172085
ISI: 000476621700001
OAI: oai:DiVA.org:su-172085
DiVA, id: diva2:1345213
Available from: 2019-08-23 Created: 2019-08-23 Last updated: 2019-08-23 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Free full text

Search in DiVA

By author/editor
Quiroz, Matias; Villani, Mattias
By organisation
Department of Statistics
In the same journal
Journal of Machine Learning Research
