Sequential Sample Average Majorization–Minimization
Abstract
Many statistical inference and machine learning methods rely on the ability to optimize an expectation functional whose explicit form is intractable. The typical method for conducting such optimization is to approximate the expected value problem by a size-N sample average, a technique often referred to as sample average approximation (SAA) or M-estimation. When the solution to the SAA problem cannot be obtained in closed form, the majorization-minimization (MM) algorithm framework constitutes a broad class of incremental optimization solutions, relying on the iterative construction of surrogates, known as majorizers, of the original problem. Solving an SAA problem requires all N observations to be available contemporaneously, which is difficult when N is large or when data arrive as a stream. We propose a stochastic MM algorithm that solves the expected value problem via iterative SAA majorizer constructions using sequential subsets of data, which we call Sequential Sample Average Majorization–Minimization (SAM2). Compared to previous stochastic MM algorithm variants, our method permits an extended definition of majorizers and does not rely on convexity or smoothness assumptions, nor does it impose functional restrictions on the class of problems and majorizers. We develop a theory of stochastic convergence for SAM2, made possible by the presentation of a novel double array uniform strong law of large numbers. Examples of SAM2 algorithms are given, along with a numerical demonstration of SAM2 on the quantile regression problem.
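To make the majorization idea concrete: classically, a surrogate u(θ | θ') majorizes an objective f if u(θ | θ') ≥ f(θ) for all θ, with equality at θ = θ', so that minimizing the surrogate forces f downhill. The sketch below is purely illustrative and is not the authors' SAM2 specification: it streams simulated batches, builds a sample-average version of the standard Hunter–Lange quadratic majorizer of the quantile check loss on the data seen so far, and minimizes that surrogate in closed form. The batch schedule, the smoothing constant `eps`, and the simulated data-generating process are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
BETA_TRUE = np.array([0.0, 1.0, -2.0, 0.5])  # intercept + slopes (assumed)

def next_batch(n=200):
    """Simulated stream: y = x' beta* + N(0, 1) noise, intercept included."""
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
    y = X @ BETA_TRUE + rng.standard_normal(n)
    return X, y

def sequential_mm_quantile(tau=0.75, n_batches=50, eps=1e-6):
    """Illustrative sequential SAA-MM loop for quantile regression.

    Each iteration, a new subset of data arrives; the quadratic majorizer
    of the check loss (Hunter & Lange, 2000) is built at the current
    iterate over all data observed so far, then minimized in closed form.
    """
    X, y = next_batch()
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS warm start
    for _ in range(n_batches):
        Xb, yb = next_batch()                          # new subset arrives
        X, y = np.vstack([X, Xb]), np.concatenate([y, yb])
        r = y - X @ beta                               # residuals at iterate
        w = 1.0 / (eps + np.abs(r))                    # majorizer weights
        # Closed-form minimizer of the quadratic SAA majorizer:
        #   (X' W X) beta = X' W y + (2*tau - 1) X' 1
        A = X.T @ (w[:, None] * X)
        b = X.T @ (w * y) + (2.0 * tau - 1.0) * X.sum(axis=0)
        beta = np.linalg.solve(A, b)
    return beta

# The fitted intercept should approach the Gaussian tau-quantile,
# here Phi^{-1}(0.75), approximately 0.674.
print(sequential_mm_quantile())
```

Since each surrogate is quadratic in beta, every MM step reduces to a weighted least-squares solve, which is what makes the check loss, itself non-smooth, tractable within this framework.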
Domains
Statistics [stat]

Origin: Files produced by the author(s)