An L∞ version of the back-propagation paradigm is proposed, and it is shown how the learning process can be formulated as an optimization problem in the Chebyshev (L∞) norm. The L2 and L∞ paradigms are compared in terms of computational cost and speed of convergence, and experimental results on the convergence of the L∞ algorithm are reported for two test cases.
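As an illustrative sketch only (not the authors' learning rule), the difference between the two norms can be seen on a toy linear problem: gradient descent on a mean-squared (L2) loss versus descent on a smooth log-sum-exp approximation of the Chebyshev (L∞) loss. All data, the smoothing parameter `beta`, the learning rate, and the step count below are hypothetical choices.

```python
import numpy as np

# Illustrative sketch, not the paper's algorithm: compare gradient descent
# on an L2 (mean-squared) loss with descent on a smooth log-sum-exp
# approximation of the L-infinity (Chebyshev) loss. All parameters are
# hypothetical choices for this toy example.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # 50 samples, 3 features
w_true = np.array([1.0, -2.0, 0.5])   # hypothetical target weights
y = X @ w_true                        # noiseless targets

def l2_grad(w):
    """Gradient of the mean-squared error (1/n) * sum_i r_i^2."""
    r = X @ w - y
    return 2.0 * X.T @ r / len(r)

def linf_grad(w, beta=20.0):
    """Gradient of (1/beta) * log sum_i [exp(beta*r_i) + exp(-beta*r_i)],
    a smooth, convex upper approximation of max_i |r_i|."""
    r = X @ w - y
    s = beta * np.concatenate([r, -r])
    p = np.exp(s - s.max())
    p /= p.sum()                      # softmax weights over +/- residuals
    n = len(r)
    return X.T @ (p[:n] - p[n:])

def train(grad, steps=2000, lr=0.01):
    w = np.zeros(3)
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_l2 = train(l2_grad)
w_linf = train(linf_grad)
print("L2   max |error|:", np.max(np.abs(X @ w_l2 - y)))
print("Linf max |error|:", np.max(np.abs(X @ w_linf - y)))
```

The log-sum-exp smoothing is one standard way to make the max-error objective differentiable so that a gradient-based rule can be applied; the paper's actual formulation may differ.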
Burrascano, P., & Lucci, P. (1990). A learning rule in the Chebyshev norm for multilayer perceptrons. In Proceedings - IEEE International Symposium on Circuits and Systems (Vol. 1, pp. 211–214). Publ by IEEE. https://doi.org/10.1007/978-94-009-0643-3_89