Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden; Karolinska Institute, Sweden.
Number of Authors: 5
2016 (English). In: Frontiers in Neuroanatomy, ISSN 1662-5129, E-ISSN 1662-5129, Vol. 10, 37. Article in journal (Refereed). Published.
Abstract [en]

SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
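The efficiency trade-off described in the abstract — advancing synaptic state analytically only when spike events arrive, rather than updating every state variable on every simulation time-step — can be illustrated with a minimal sketch. This is not the paper's implementation: BCPNN uses several cascaded traces with coupled time constants, whereas the toy below tracks a single exponentially decaying trace, and the function names and the time constant are purely illustrative.

```python
import math

TAU_Z = 10.0  # illustrative trace time constant, in ms

def advance_trace(z, t_last, t_now, tau=TAU_Z):
    """Analytically decay trace z from time t_last to t_now.

    Because the trace obeys dz/dt = -z/tau between spikes, its value
    at t_now is known in closed form, so no per-time-step update is
    needed while the synapse is silent.
    """
    return z * math.exp(-(t_now - t_last) / tau)

def on_spike(z, t_last, t_now, tau=TAU_Z):
    """Handle a spike event: decay to the event time, add the spike's
    contribution, and record the new last-update time."""
    z = advance_trace(z, t_last, t_now, tau) + 1.0
    return z, t_now

# Two spikes, 20 ms apart: the first sets z = 1, the second finds it
# decayed to exp(-20/10) and adds 1, giving z = 1 + e^-2 ~ 1.135.
z, t = on_spike(0.0, 0.0, 0.0)
z, t = on_spike(z, t, 20.0)
```

The design point this illustrates is the one the abstract raises: the event-driven form trades a cheap per-step decrement for a more expensive `exp` evaluation per event, which on SpiNNaker's fixed-point ARM cores is where the efficiency/accuracy trade-offs discussed in the paper arise.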

Place, publisher, year, edition, pages
2016. Vol. 10, 37
Keyword [en]
SpiNNaker, learning, plasticity, digital neuromorphic hardware, Bayesian confidence propagation neural network (BCPNN), event-driven simulation, fixed-point accuracy
National Category
Neurosciences; Neurology; Computer Science
Identifiers
URN: urn:nbn:se:su:diva-130177
DOI: 10.3389/fnana.2016.00037
ISI: 000373595100002
PubMedID: 27092061
OAI: oai:DiVA.org:su-130177
DiVA: diva2:927141
Available from: 2016-05-11. Created: 2016-05-09. Last updated: 2016-05-11. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text; PubMed

By author/editor: Lansner, Anders
By organisation: Numerical Analysis and Computer Science (NADA)