A learning rule in the Chebyshev norm for multilayer perceptrons


Abstract

An L∞ version of the back-propagation paradigm is proposed. A comparison between the L2 and the L∞ paradigms is presented, taking into account computational cost and speed of convergence. It is shown how the learning process can be formulated as an optimization problem. Experimental results on the convergence of the L∞ algorithm in two test cases are presented.
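To make the contrast concrete, the following is a minimal sketch (not the authors' exact algorithm) of how an L∞ (Chebyshev-norm) training step differs from a standard L2 back-propagation step for a one-hidden-layer perceptron. The data, network size, and learning rates are hypothetical; the key point is that the L∞ loss max_i |out_i − y_i| is nonsmooth, so its subgradient is driven entirely by the worst-case sample, weighted by the sign of its error, whereas the L2 gradient averages over all samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression task: 20 samples, 3 inputs, linear target.
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -1.0, 0.5])

# One-hidden-layer perceptron: tanh hidden units, linear output.
W1 = 0.1 * rng.normal(size=(3, 5))
w2 = 0.1 * rng.normal(size=5)

def forward(X, W1, w2):
    H = np.tanh(X @ W1)          # hidden activations
    return H, H @ w2             # network outputs

def l2_step(X, y, W1, w2, lr=0.05):
    """Standard back-propagation step on the mean-squared (L2) error."""
    H, out = forward(X, W1, w2)
    err = out - y                           # residual for every sample
    g_out = 2.0 * err / len(y)              # dMSE/dout
    g_w2 = H.T @ g_out
    g_H = np.outer(g_out, w2) * (1 - H**2)  # back-prop through tanh
    g_W1 = X.T @ g_H
    return W1 - lr * g_W1, w2 - lr * g_w2

def linf_step(X, y, W1, w2, lr=0.1):
    """Subgradient step on the Chebyshev loss max_i |out_i - y_i|:
    only the worst-case sample drives the update."""
    H, out = forward(X, W1, w2)
    err = out - y
    i = int(np.argmax(np.abs(err)))         # worst-case sample index
    s = np.sign(err[i])                     # subgradient sign
    g_w2 = s * H[i]
    g_h = s * w2 * (1 - H[i]**2)            # back-prop through tanh at sample i
    g_W1 = np.outer(X[i], g_h)
    return W1 - lr * g_W1, w2 - lr * g_w2

def max_err(X, y, W1, w2):
    _, out = forward(X, W1, w2)
    return float(np.max(np.abs(out - y)))

e0 = max_err(X, y, W1, w2)
A1, a2 = W1.copy(), w2.copy()
for _ in range(500):
    A1, a2 = linf_step(X, y, A1, a2)
```

Note the computational-cost asymmetry the paper's comparison alludes to: the L∞ subgradient still requires a full forward pass to locate the worst-case sample, but the backward pass touches only that one sample per update.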

Citation (APA)

Burrascano, P., & Lucci, P. (1990). A learning rule in the Chebyshev norm for multilayer perceptrons. In Proceedings - IEEE International Symposium on Circuits and Systems (Vol. 1, pp. 211–214). Publ by IEEE. https://doi.org/10.1007/978-94-009-0643-3_89
