An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms

Matthew W. Jacobson, Jeffrey A. Fessler

Research output: Contribution to journal › Article

Abstract

The majorize-minimize (MM) optimization technique has received considerable attention in signal and image processing applications, as well as in the statistics literature. At each iteration of an MM algorithm, one constructs a tangent majorant function that majorizes the given cost function and is equal to it at the current iterate. The next iterate is obtained by minimizing this tangent majorant function, resulting in a sequence of iterates that reduces the cost function monotonically. Expectation-maximization (EM) algorithms are a well-known special case of MM methods. In this paper, we expand on previous analyses of MM, due to Fessler and Hero, that allowed the tangent majorants to be constructed in iteration-dependent ways. This paper also overcomes an error in one of those earlier analyses. Our analysis builds upon previous work in three main ways. First, our treatment relaxes many assumptions related to the structure of the cost function, feasible set, and tangent majorants. For example, the cost function can be nonconvex, and the feasible set for the problem can be any convex set. Second, we propose convergence conditions, based on upper curvature bounds, that can be easier to verify than more standard continuity conditions. Furthermore, these conditions allow for considerable design freedom in the iteration-dependent behavior of the algorithm. Finally, we give an original characterization of the local region of convergence of MM algorithms based on connected (e.g., convex) tangent majorants. For such algorithms, cost function minimizers will locally attract the iterates over larger neighborhoods than is typically guaranteed with other methods. This expanded treatment widens the scope of the MM algorithm designs that can be considered for signal and image processing applications, allows us to verify the convergent behavior of previously published algorithms, and gives a fuller understanding overall of how these algorithms behave.
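
To make the MM recipe described in the abstract concrete, the following minimal sketch (not taken from the paper; the cost function and all names are illustrative) uses one common construction consistent with the curvature-based reasoning mentioned above: a quadratic tangent majorant whose curvature upper-bounds the second derivative of a simple scalar, nonconvex cost f(x) = log(1 + x^2). Minimizing the surrogate in closed form produces the next iterate and guarantees monotone descent.

import math

# A minimal, self-contained sketch of one MM iteration scheme. This is an
# illustration, not the algorithm analyzed in the paper: the cost function
# f(x) = log(1 + x^2) and the constant C_UPPER are chosen for the example.
# The surrogate phi(x; x_k) below is a quadratic tangent majorant: it equals
# f at x_k and lies above f everywhere because its curvature C_UPPER bounds
# f''(x) = 2*(1 - x^2) / (1 + x^2)^2 from above.

def f(x):
    """Example cost function: smooth but nonconvex (concave for |x| > 1)."""
    return math.log(1.0 + x * x)

def f_prime(x):
    """Derivative of the example cost function."""
    return 2.0 * x / (1.0 + x * x)

C_UPPER = 2.0  # upper curvature bound: f''(x) <= 2 for all x

def tangent_majorant(x, x_k):
    """Quadratic surrogate that majorizes f and touches it at x_k."""
    d = x - x_k
    return f(x_k) + f_prime(x_k) * d + 0.5 * C_UPPER * d * d

def mm_step(x_k):
    """Minimize the tangent majorant in closed form to get the next iterate."""
    return x_k - f_prime(x_k) / C_UPPER

x = 3.0  # arbitrary starting point
for k in range(20):
    x_next = mm_step(x)
    # Monotone descent: f(x_next) <= phi(x_next; x) <= phi(x; x) = f(x).
    assert f(x_next) <= tangent_majorant(x_next, x) + 1e-12
    assert tangent_majorant(x_next, x) <= f(x) + 1e-12
    x = x_next
    print(f"iteration {k:2d}: x = {x: .6f}, f(x) = {f(x):.6f}")

Because the surrogate's curvature dominates f'' everywhere, each surrogate minimization can only decrease the cost, and the iterates move monotonically toward the global minimizer at x = 0. This mirrors, in the simplest possible setting, the kind of upper-curvature-bound condition the abstract refers to.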

Original language: English (US)
Pages (from-to): 2411-2422
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 16
Issue number: 10
DOI: 10.1109/TIP.2007.904387
State: Published - Oct 2007
Externally published: Yes

Keywords

  • Expectation-maximization (EM)
  • Majorize-minimize (MM)
  • Optimization transfer
  • SAGE

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Graphics and Computer-Aided Design
  • Software
  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Computer Vision and Pattern Recognition

Cite this

Jacobson, Matthew W.; Fessler, Jeffrey A. An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms. In: IEEE Transactions on Image Processing, Vol. 16, No. 10, Oct. 2007, pp. 2411-2422.

@article{0175ecb5e8fb423d9b627b7a2a6ad18e,
title = "An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms",
keywords = "Expectation-maximization (EM), Majorize-minimize (MM), Optimization transfer, SAGE",
author = "Jacobson, {Matthew W.} and Fessler, {Jeffrey A.}",
year = "2007",
month = "10",
doi = "10.1109/TIP.2007.904387",
language = "English (US)",
volume = "16",
pages = "2411--2422",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "10",

}

TY - JOUR

T1 - An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms

AU - Jacobson, Matthew W.

AU - Fessler, Jeffrey A.

PY - 2007/10

Y1 - 2007/10


KW - Expectation-maximization (EM)

KW - Majorize-minimize (MM)

KW - Optimization transfer

KW - SAGE

UR - http://www.scopus.com/inward/record.url?scp=34648825731&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=34648825731&partnerID=8YFLogxK

U2 - 10.1109/TIP.2007.904387

DO - 10.1109/TIP.2007.904387

M3 - Article

C2 - 17926925

AN - SCOPUS:34648825731

VL - 16

SP - 2411

EP - 2422

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 10

ER -