Computational experience with Pseudoinversion-Based training of neural networks using random projection matrices


Abstract

Recently, some novel strategies have been proposed for neural network training that randomly set the weights from the input to the hidden layer, while the weights from the hidden to the output layer are determined analytically via the Moore-Penrose generalised inverse; such non-iterative strategies are appealing because they allow fast learning. The aim of this study is to investigate performance variability when random projections are used to set the input weights conveniently: we compare them with the state-of-the-art setting, i.e. weights drawn from a continuous uniform distribution. We compare the solutions obtained by the different methods on several UCI datasets for both regression and classification tasks; the proposed approach yields a significant performance improvement over the conventional method.
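The training scheme the abstract describes can be sketched in a few lines of NumPy: input weights are drawn at random, hidden activations are computed once, and output weights are obtained in closed form via the pseudoinverse. This is a minimal illustration, not the authors' exact implementation; the function names, the tanh activation, and the Gaussian projection used as the alternative initialisation are all assumptions for the sketch.

```python
import numpy as np

def train_pseudoinverse(X, T, n_hidden, weight_init, rng):
    """Single-hidden-layer net: random input weights, pseudoinverse output weights.

    X: (n_samples, n_features) inputs, T: (n_samples, n_outputs) targets.
    """
    n_features = X.shape[1]
    W = weight_init(rng, n_features, n_hidden)  # input-to-hidden weights, set randomly
    H = np.tanh(X @ W)                          # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                # Moore-Penrose solution for output weights
    return W, beta

def predict(X, W, beta):
    return np.tanh(X @ W) @ beta

# Conventional setting: weights from a continuous uniform distribution.
uniform_init = lambda rng, d, h: rng.uniform(-1.0, 1.0, size=(d, h))

# Random-projection setting: a Gaussian projection matrix is one common
# choice (an assumption here; the paper compares several projection types).
gaussian_init = lambda rng, d, h: rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, h))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    T = X @ rng.normal(size=(8, 1)) + 0.1 * rng.normal(size=(100, 1))
    for name, init in [("uniform", uniform_init), ("gaussian projection", gaussian_init)]:
        W, beta = train_pseudoinverse(X, T, n_hidden=50, weight_init=init, rng=rng)
        mse = np.mean((predict(X, W, beta) - T) ** 2)
        print(f"{name}: train MSE = {mse:.4f}")
```

Because the output weights are solved in one linear-algebra step rather than by iterative gradient descent, training cost is dominated by a single pseudoinverse of the hidden-activation matrix, which is what makes these strategies fast.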

Citation (APA)

Rubini, L., Cancelliere, R., Gallinari, P., Grosso, A., & Raiti, A. (2014). Computational experience with Pseudoinversion-Based training of neural networks using random projection matrices. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8722, 236–245. https://doi.org/10.1007/978-3-319-10554-3_24
