Abstract
The expectation-maximization (EM) algorithm is a well-known iterative method for computing maximum likelihood estimates in a variety of statistical problems. Despite its numerous advantages, a main drawback of the EM algorithm is its frequently observed slow convergence, which often hinders its application in high-dimensional problems or other complex settings. To address the need for more rapidly convergent EM algorithms, we describe a new class of acceleration schemes that build on the Anderson acceleration technique for speeding up fixed-point iterations. Our approach is effective at greatly accelerating the convergence of EM algorithms and scales automatically to high-dimensional settings. Through the introduction of periodic algorithm restarts and a damping factor, our acceleration scheme provides faster and more robust convergence than unmodified Anderson acceleration, while also improving global convergence. Crucially, our method works as an "off-the-shelf" method: it may be used directly to accelerate any EM algorithm without relying on any model-specific features or insights. Through a series of simulation studies involving five representative problems, we show that our algorithm is substantially faster than existing state-of-the-art acceleration schemes. The acceleration schemes described in this paper are implemented in the R package daarem, which is available from the Comprehensive R Archive Network (https://cran.r-project.org). Supplementary materials for this article are available online.
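To make the underlying idea concrete, the following is a minimal sketch of damped Anderson acceleration applied to a generic fixed-point map `x <- G(x)` (in an EM setting, `G` would be one full EM update of the parameter vector). This is a generic illustration in Python/NumPy, not the daarem package's implementation: the names `G`, `m`, and `beta` are illustrative, and the paper's additional ingredients (periodic restarts, an adaptively tuned damping factor, and monotonicity safeguards) are omitted here for brevity.

```python
import numpy as np

def anderson_accelerate(G, x0, m=5, beta=1.0, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- G(x) with Anderson mixing.

    m    : memory, i.e. the number of stored difference pairs (illustrative)
    beta : damping factor in (0, 1]; beta = 1.0 recovers undamped Anderson
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    g = G(x)
    f = g - x                        # fixed-point residual G(x) - x
    dG_hist, dF_hist = [], []        # differences of G-values and residuals
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:  # converged to a fixed point
            break
        if dF_hist:
            dF = np.column_stack(dF_hist)
            dG = np.column_stack(dG_hist)
            # Least-squares combination of past residual differences
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            # Damped Anderson update: beta blends the extrapolated
            # iterate with the extrapolated G-value
            x_new = g - dG @ gamma - (1.0 - beta) * (f - dF @ gamma)
        else:
            x_new = x + beta * f     # first step: plain damped iteration
        g_new = G(x_new)
        f_new = g_new - x_new
        dG_hist.append(g_new - g)
        dF_hist.append(f_new - f)
        if len(dF_hist) > m:         # finite memory: drop the oldest pair
            dG_hist.pop(0)
            dF_hist.pop(0)
        x, g, f = x_new, g_new, f_new
    return x
```

For example, `anderson_accelerate(np.cos, 1.0)` converges rapidly to the fixed point of cosine, whereas the plain iteration `x <- cos(x)` converges only linearly; slow linearly convergent maps of exactly this kind are what make unaccelerated EM expensive.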
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 834-846 |
| Number of pages | 13 |
| Journal | Journal of Computational and Graphical Statistics |
| Volume | 28 |
| Issue number | 4 |
| DOIs | |
| State | Published - Oct 2 2019 |
Keywords
- Algorithm restarts
- Convergence acceleration
- MM algorithm
- Quasi-Newton
- SQUAREM
ASJC Scopus subject areas
- Statistics and Probability
- Discrete Mathematics and Combinatorics
- Statistics, Probability and Uncertainty