Reducing the computational footprint for real-time BCPNN learning
Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology (KTH), Sweden.
2015 (English). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 9, article 2. Article in journal (Refereed). Published.
Abstract [en]

The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the presynaptic, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which halves the number of basic arithmetic operations per update, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
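The core idea summarized above, updating each low-pass-filtered trace only when a spike event arrives, using the closed-form exponential decay with exp(-dt/tau) read from a precomputed look-up table, can be illustrated with a short sketch. The C snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the names (Trace, on_spike), the table size and resolution, and the tau value are all illustrative, and the full BCPNN rule tracks eight such state variables across three filtering stages (plus the fixed-point representation), which is not shown here.

/* Minimal sketch of an event-driven trace update with a decay look-up
 * table. Between spikes the trace decays exponentially, so its value is
 * only brought up to date when a spike event arrives. */
#include <math.h>
#include <stdio.h>

#define LUT_SIZE 1024   /* number of tabulated decay factors (assumed) */
#define LUT_DT   0.1    /* table resolution in ms (assumed)            */

static double decay_lut[LUT_SIZE];

/* Precompute exp(-dt/tau) for dt = 0, LUT_DT, 2*LUT_DT, ... */
static void init_decay_lut(double tau_ms) {
    for (int i = 0; i < LUT_SIZE; ++i)
        decay_lut[i] = exp(-(i * LUT_DT) / tau_ms);
}

typedef struct {
    double z;       /* low-pass-filtered spike trace    */
    double t_last;  /* time of the previous update (ms) */
} Trace;

/* Event-driven update: analytically decay the trace over the elapsed
 * interval (nearest look-up-table entry, no interpolation), then add
 * the spike's contribution. */
static void on_spike(Trace *tr, double t_ms) {
    double dt = t_ms - tr->t_last;
    int idx = (int)(dt / LUT_DT + 0.5);            /* round to nearest */
    double decay = (idx < LUT_SIZE) ? decay_lut[idx]
                                    : 0.0;          /* long gap: treat as
                                                       fully decayed    */
    tr->z = tr->z * decay + 1.0;   /* unit spike increment (assumed) */
    tr->t_last = t_ms;
}

int main(void) {
    init_decay_lut(20.0);          /* tau = 20 ms, illustrative value */
    Trace tr = {0.0, 0.0};
    double spikes[] = {5.0, 12.0, 30.0};
    for (int i = 0; i < 3; ++i) {
        on_spike(&tr, spikes[i]);
        printf("t = %5.1f ms  z = %f\n", spikes[i], tr.z);
    }
    return 0;
}

Indexing the table rather than calling exp() at every event is what makes the decay cheap; the trade-off, as the paper's fixed-point analysis suggests, is a bounded quantization error set by the table resolution and the bit width of the state variables.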

Place, publisher, year, edition, pages
2015. Vol. 9, article 2
Keyword [en]
Bayesian confidence propagation neural network (BCPNN), Hebbian learning, synaptic plasticity, event-driven simulation, spiking neural networks, look-up tables, fixed-point accuracy, digital neuromorphic hardware
National Category
Neurosciences; Neurology; Bioinformatics (Computational Biology)
URN: urn:nbn:se:su:diva-117399
DOI: 10.3389/fnins.2015.00002
ISI: 000352944600001
PubMedID: 25657618
OAI: diva2:815266


Available from: 2015-05-29. Created: 2015-05-19. Last updated: 2015-05-29. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
PubMed

Search in DiVA

By author/editor: Lansner, Anders
By organisation: Numerical Analysis and Computer Science (NADA)
In the same journal: Frontiers in Neuroscience

