Reinforcement learning with low-complexity liquid state machines


Abstract

We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse, randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. These random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain a fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
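
The abstract describes a fixed, sparse, randomly connected spiking reservoir (the "liquid") whose only trainable parameters sit in a linear readout updated with Q-learning. The sketch below illustrates that structure; it is not the authors' implementation. It assumes leaky integrate-and-fire (LIF) reservoir neurons with a hard reset, treats a single spike vector as the liquid state, and every name and hyperparameter (SPARSITY, LEAK, LR, and so on) is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    N_IN, N_RES, N_ACT = 8, 200, 4    # input features, liquid neurons, actions
    SPARSITY = 0.1                    # fraction of nonzero random connections
    V_TH, LEAK = 1.0, 0.9             # LIF firing threshold and membrane leak
    GAMMA, LR = 0.99, 1e-3            # Q-learning discount and readout step size

    def sparse_random(rows, cols, scale):
        """Fixed (untrained) sparse random weights for the liquid."""
        mask = rng.random((rows, cols)) < SPARSITY
        return mask * rng.normal(0.0, scale, size=(rows, cols))

    W_in = sparse_random(N_RES, N_IN, 1.0)    # feed-forward input connections
    W_rec = sparse_random(N_RES, N_RES, 0.5)  # sparse recurrent connections
    W_out = np.zeros((N_ACT, N_RES))          # the only trainable parameters

    v = np.zeros(N_RES)        # membrane potentials
    spikes = np.zeros(N_RES)   # previous step's spikes (source of fading memory)

    def liquid_step(x):
        """One LIF update: leak, integrate input and recurrent spikes, fire, reset."""
        global v, spikes
        v = LEAK * v + W_in @ x + W_rec @ spikes
        spikes = (v >= V_TH).astype(float)
        v[spikes > 0] = 0.0
        return spikes.copy()

    def q_values(liquid_state):
        """Linear readout: map the high-dimensional liquid state to Q-values."""
        return W_out @ liquid_state

    def q_update(s, a, r, s_next, done):
        """Semi-gradient Q-learning TD update applied only to the readout."""
        target = r + (0.0 if done else GAMMA * q_values(s_next).max())
        td_error = target - q_values(s)[a]
        W_out[a] += LR * td_error * s   # gradient of the linear readout w.r.t. W_out[a]

In a full system the readout would typically act on a low-pass-filtered spike count accumulated over several liquid simulation steps per RL time-step rather than on a single spike vector; the recurrent term W_rec @ spikes is what gives the liquid the fading memory of past inputs that the abstract credits for temporal integration.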

Citation

Ponghiran, W., Srinivasan, G., & Roy, K. (2019). Reinforcement learning with low-complexity liquid state machines. Frontiers in Neuroscience, 13, 883. https://doi.org/10.3389/fnins.2019.00883


