A digital neural network architecture for VLSI

Abstract

An approach to solving the two most serious shortcomings of previous artificial neural network implementations, restricted network topology and performance that degrades with network size, is discussed. A flexible architecture that permits the realization of arbitrary network topologies and dimensions is presented. The performance of this architecture is independent of the size of the network and permits the processing of typically 100,000 patterns per second. The key innovation is the representation of neuron activations and synaptic weights as stochastic functions of time, leading to efficient implementations of the synapses. High densities of synapses per silicon area, exceeding even analog implementations, have been achieved. Finally, the neuron activations are represented digitally, as are the synaptic computations, thereby permitting fabrication of digital neural network architectures using a variety of standard, low-cost semiconductor processes. A pair of general-purpose chips that permits post facto construction of neural networks of arbitrary topology and virtually unlimited dimensions is presented.
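The key innovation described above is stochastic (pulse-stream) arithmetic: a value in [0, 1] is encoded as the probability that a bit in a random bitstream is 1, so multiplying an activation by a weight reduces to a single AND gate per synapse. The sketch below is a minimal software illustration of that general technique, not the paper's actual encoding or circuitry; the stream length, value range, and function names are assumptions made for the example.

```python
# Minimal sketch of stochastic (pulse-stream) multiplication, the general
# technique the abstract alludes to. Values in [0, 1] are encoded as the
# probability of a 1-bit in a random bitstream, so a bitwise AND of two
# independent streams approximates the product of the encoded values.
# The encoding and circuits in the actual chips may differ.
import random

def encode(value, length, rng):
    """Encode a value in [0, 1] as a random bitstream of the given length."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Recover the encoded value as the fraction of 1-bits in the stream."""
    return sum(stream) / len(stream)

def and_gate(a, b):
    """Bitwise AND of two streams; approximates the product of their values."""
    return [x & y for x, y in zip(a, b)]

rng = random.Random(0)
length = 100_000                       # longer streams reduce estimation variance
activation = encode(0.8, length, rng)  # hypothetical neuron activation = 0.8
weight = encode(0.5, length, rng)      # hypothetical synaptic weight = 0.5

product = decode(and_gate(activation, weight))
print(f"stochastic product ~ {product:.3f} (exact: {0.8 * 0.5})")
```

Because the multiply is a single gate rather than a digital multiplier, many such synapses fit in a small silicon area, which is consistent with the density claim in the abstract.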

Citation (APA)
Tomlinson, M. S., Walker, D. J., & Sivilotti, M. A. (1990). A digital neural network architecture for VLSI. In IJCNN International Joint Conference on Neural Networks (pp. 545–550). IEEE. https://doi.org/10.1109/ijcnn.1990.137764
