Multilevel stochastic gradient methods for nested composition optimization

Shuoguang Yang, Mengdi Wang, Xingyan Ethan Fang

Research output: Contribution to journal › Article

Abstract

Stochastic gradient methods are scalable for solving large-scale optimization problems that involve empirical expectations of loss functions. Existing results mainly apply to optimization problems where the objectives are one- or two-level expectations. In this paper, we consider the multilevel composition optimization problem, which involves compositions of multilevel component functions and nested expectations over a random path. This problem finds applications in risk-averse optimization and sequential planning. We propose a class of multilevel stochastic gradient methods that are motivated by the method of multi-timescale stochastic approximation. First, we propose a basic T-level stochastic compositional gradient algorithm. Then we develop accelerated multilevel stochastic gradient methods by using an extrapolation-interpolation scheme to take advantage of the smoothness of individual component functions. When all component functions are smooth, we show that the convergence rate improves to O(n^(-4/(7+T))) for general objectives and O(n^(-4/(3+T))) for strongly convex objectives. We also provide almost sure convergence and rate of convergence results for nonconvex problems. The proposed methods and theoretical results are validated using numerical experiments.
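For intuition, below is a minimal sketch (not the authors' code) of the multi-timescale idea in the simplest two-level case (T = 2): a fast-timescale moving average tracks the inner expectation while the slow-timescale iterate descends along a chain-rule gradient estimate. The toy functions g_sample, g_jac, and f_grad, the step-size exponents, and the noise scales are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 5

def g_sample(x, xi):
    # Noisy inner map: g(x; xi) = x + xi, so E[g(x; xi)] = x (toy assumption).
    return x + xi

def g_jac(x, xi):
    # Jacobian of g with respect to x; the identity for this toy map.
    return np.eye(d)

def f_grad(y, eta):
    # Gradient in y of the outer loss f(y; eta) = 0.5 * ||y - eta||^2 (toy assumption).
    return y - eta

x = rng.normal(size=d)
y = np.zeros(d)  # running estimate of the inner expectation E[g(x; xi)]

for k in range(1, 20001):
    alpha = 1.0 / k ** 0.75  # slow timescale: step size for the iterate x
    beta = 1.0 / k ** 0.5    # fast timescale: averaging weight for y
    xi = 0.1 * rng.normal(size=d)
    eta = 0.1 * rng.normal(size=d)
    y = (1 - beta) * y + beta * g_sample(x, xi)  # track the inner expectation
    grad = g_jac(x, xi).T @ f_grad(y, eta)       # chain-rule gradient estimate
    x = x - alpha * grad

print("final iterate (the toy objective is minimized at 0):", np.round(x, 3))

The full T-level methods in the paper nest T - 1 such tracking variables, one per composition level, each updated on its own timescale; the accelerated variants additionally apply an extrapolation-interpolation scheme when the component functions are smooth.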

Original language: English (US)
Pages (from-to): 616-659
Number of pages: 44
Journal: SIAM Journal on Optimization
Volume: 29
Issue number: 1
Publication status: Published - Jan 1 2019
Externally published: Yes

Keywords

  • Convex optimization
  • Sample complexity
  • Simulation
  • Statistical learning
  • Stochastic gradient
  • Stochastic optimization

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
