A stochastic quasi-Newton method for online convex optimization

ISSN: 1532-4435
Citations: 198
Mendeley readers: 166

Abstract

We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and limited-memory (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS and on extending it to nonconvex optimization problems.
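As a rough illustration of the kind of method the abstract describes, the following is a minimal NumPy sketch of a generic stochastic (online) L-BFGS loop: a standard two-loop recursion supplies the quasi-Newton direction, and the curvature pairs are built from gradient differences taken on the same minibatch and damped by a small constant. The helper names (grad_fn, sampler), the damping constant lam, the step-size schedule, and the positive-curvature test are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np
from collections import deque

def two_loop(grad, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: returns an approximation of H^{-1} grad."""
    q = grad.copy()
    alphas = []
    # Backward pass over curvature pairs, newest first.
    for s, y in reversed(list(zip(s_hist, y_hist))):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if s_hist:
        # Scale by the usual initial-Hessian estimate from the newest pair.
        s, y = s_hist[-1], y_hist[-1]
        q *= s.dot(y) / y.dot(y)
    # Forward pass, oldest first; reversed(alphas) restores matching order.
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(q)
        q += (a - b) * s
    return q

def online_lbfgs(grad_fn, w, sampler, eta=0.1, lam=1e-2, m=10, steps=1000):
    """Hypothetical sketch of a stochastic L-BFGS loop.

    grad_fn(w, batch) -> stochastic gradient; sampler() -> minibatch.
    lam damps the curvature pairs, which stochastic quasi-Newton methods
    typically need to keep y.dot(s) > 0 under gradient noise.
    """
    s_hist, y_hist = deque(maxlen=m), deque(maxlen=m)
    for t in range(steps):
        batch = sampler()
        g = grad_fn(w, batch)
        p = -two_loop(g, list(s_hist), list(y_hist))
        s = eta / (1.0 + 1e-3 * t) * p        # decaying step size (assumed schedule)
        w = w + s
        # Key online trick: re-evaluate the gradient at the new w on the
        # SAME minibatch, so s and y reflect consistent curvature.
        y = grad_fn(w, batch) - g + lam * s   # damped gradient difference
        if y.dot(s) > 1e-10:                  # keep the pair only if curvature is positive
            s_hist.append(s)
            y_hist.append(y)
    return w
```

Re-evaluating the gradient at the new iterate on the same minibatch is what keeps the pair (s, y) a consistent curvature estimate despite sampling noise; with a fresh minibatch, s and y would reflect two different objectives.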


Citation (APA)

Schraudolph, N. N., Yu, J., & Günter, S. (2007). A stochastic quasi-Newton method for online convex optimization. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS), JMLR Workshop and Conference Proceedings (Vol. 2, pp. 436–443).

Readers' Seniority

PhD / Post grad / Masters / Doc    79    62%
Researcher                         31    24%
Professor / Associate Prof.        13    10%
Lecturer / Post doc                 5     4%

Readers' Discipline

Computer Science                   72    62%
Mathematics                        23    20%
Engineering                        15    13%
Earth and Planetary Sciences        7     6%
