This paper addresses the problem of online learning of finite statistical mixtures of regular exponential families. We first concisely review gradient-based and stochastic gradient-based optimization methods and their generalizations. We then focus on two stochastic versions of the celebrated Expectation-Maximization (EM) algorithm: Titterington's second-order stochastic gradient EM and Cappé and Moulines' online EM. Depending on which step of EM is approximated, the constraints on the mixture parameters may be violated. We provide a justification of these approaches as well as ready-to-use formulas for mixtures of regular exponential families. Finally, to illustrate our study, we report experimental comparisons on univariate normal mixtures.
Saint-Jean, C., & Nielsen, F. (2017). Batch and Online Mixture Learning: A Review with Extensions (pp. 267–299). https://doi.org/10.1007/978-3-319-47058-0_11