Scalable MCMC for large data problems using data subsampling and the difference estimator
Manuscript (preprint), in English (Other academic)
We propose a generic Markov chain Monte Carlo (MCMC) algorithm to speed up computations for data sets with many observations. A key feature of our approach is the use of the highly efficient difference estimator from the survey sampling literature to estimate the log-likelihood accurately using only a small fraction of the data. Our algorithm improves on the O(n) complexity of regular MCMC by operating over local data clusters instead of the full sample when computing the likelihood. The likelihood estimate is used in a pseudo-marginal framework to sample from a perturbed posterior which is within O(m^{-1/2}) of the true posterior, where m is the subsample size. The method is applied to a logistic regression model to predict firm bankruptcy on a large data set, and we document a significant speed-up in comparison to standard MCMC on the full data set.
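The difference estimator mentioned in the abstract can be sketched as follows. Writing the log-likelihood as a sum of per-observation terms ℓ_i(θ), the estimator combines cheap proxies q_i(θ) for every observation with an exact evaluation of ℓ_i(θ) on a small random subsample, correcting the proxy total by the scaled mean difference. This is a minimal illustration, not the authors' implementation: how the proxies q_i are built (e.g. from expansions around local data clusters) and the exact sampling design are assumptions here.

```python
import numpy as np

def difference_estimator(ell, q, m, rng):
    """Unbiased estimate of sum_i ell_i(theta) via the difference estimator.

    ell : callable mapping an index array to the exact per-observation
          log-likelihood terms (evaluated only on the subsample)
    q   : array of precomputed proxy values q_i -- cheap approximations
          of ell_i (how they are built is an assumption in this sketch)
    m   : subsample size
    rng : numpy Generator for the random subsample
    """
    n = len(q)
    # Simple random sampling with replacement (one possible design).
    idx = rng.integers(0, n, size=m)
    # Exact-minus-proxy differences on the subsample only: O(m) work.
    d = ell(idx) - q[idx]
    # Proxy total over all n observations plus the scaled correction.
    return q.sum() + n * d.mean()
```

In a pseudo-marginal scheme, the exponential of such an estimate replaces the intractable likelihood in the Metropolis-Hastings acceptance ratio; because exp of an unbiased log-likelihood estimate is biased for the likelihood itself, the chain targets a perturbed posterior, consistent with the O(m^{-1/2}) error bound stated above. The closer the proxies q_i are to the true terms, the smaller the variance of the correction.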
Keywords: Bayesian inference, Markov chain Monte Carlo, pseudo-marginal MCMC, estimated likelihood, GLM for large data
Probability Theory and Statistics
Research subject: Statistics
Identifiers
URN: urn:nbn:se:su:diva-118137
OAI: oai:DiVA.org:su-118137
DiVA: diva2:820454