1 - 31 of 31
  • 1.
    Berthet, Pierre
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Hellgren-Kotaleski, Jeanette
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Action selection performance of a reconfigurable basal ganglia inspired model with Hebbian-Bayesian Go-NoGo connectivity (2012). In: Frontiers in Behavioral Neuroscience, ISSN 1662-5153, E-ISSN 1662-5153, Vol. 6, 65. Article in journal (Refereed)
    Abstract [en]

Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine dependent learning. The dopaminergic signal to striatum, the input stage of the BG, has been commonly described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in the modulation of the synaptic plasticity in cortico-striatal synapses in the direct and indirect pathway. We developed an abstract computational model of the BG, with a dual pathway structure functionally corresponding to the direct and indirect pathways, and compared its behavior to biological data as well as other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian-Bayesian learning rule based on co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and the indirect (NoGo) pathway, as well as the reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model system when different configurations of the Go, NoGo, and RP system were utilized, e.g., using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioral data. Our results, however, show that there is no unique best way to configure this BG model to handle all the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and also on how much time is available.
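
    The gist of the three-factor rule can be sketched as follows: a weight changes only when presynaptic activity, postsynaptic activity, and a nonzero RPE coincide. This is a minimal illustration with made-up names and a plain outer-product update, not the paper's actual Hebbian-Bayesian (BCPNN-based) formulation:

    ```python
    import numpy as np

    def three_factor_update(w, pre, post, rpe, lr=0.01):
        """Sketch of a three-factor update: Hebbian co-activation
        (outer product of post- and presynaptic activity) gated by
        the reward prediction error (RPE)."""
        return w + lr * rpe * np.outer(post, pre)

    # Toy usage: a positive RPE strengthens Go weights for the selected
    # action; a negative RPE would instead favor the NoGo pathway.
    pre = np.array([1.0, 0.0])   # active input unit
    post = np.array([0.0, 1.0])  # selected action unit
    w_go = three_factor_update(np.zeros((2, 2)), pre, post, rpe=0.5)
    print(w_go)
    ```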

  • 2.
    Berthet, Pierre
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Optogenetic Stimulation in a Computational Model of the Basal Ganglia Biases Action Selection and Reward Prediction Error (2014). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 9, no 3, e90578. Article in journal (Refereed)
    Abstract [en]

Optogenetic stimulation of specific types of medium spiny neurons (MSNs) in the striatum has been shown to bias action selection in mice performing a two-choice task. This shift depends on the localisation and intensity of the stimulation, but also on the recent reward history. We have implemented a way to simulate this increased activity produced by the optical flash in our computational model of the basal ganglia (BG). This abstract model features the direct and indirect pathways commonly described in biology, and a reward prediction pathway (RP). The framework is similar to Actor-Critic methods and to the ventral/dorsal distinction in the striatum. We thus investigated the impact on selection caused by an added stimulation in each of the three pathways. We were able to reproduce in our model the bias in action selection observed in mice. Our results also showed that biasing the reward prediction is sufficient to create a modification in the action selection. However, we had to increase the percentage of trials with stimulation relative to that in experiments in order to impact the selection. We found that increasing only the reward prediction had a different effect depending on whether the stimulation in RP was action dependent (only for a specific action) or not. We further looked at the evolution of the change in the weights depending on the stage of learning within a block. A bias in RP impacts the plasticity differently depending on that stage but also on the outcome. It remains to experimentally test how the dopaminergic neurons are affected by specific stimulations of neurons in the striatum and to relate data to predictions of our model.

  • 3.
    Berthet, Pierre
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
    Lindahl, Mikael
    Tully, Philip
    Hellgren-Kotaleski, Jeanette
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
Functional relevance of different basal ganglia pathways investigated in a spiking model with reward dependent plasticity. Manuscript (preprint) (Other academic)
  • 4.
    Berthet, Pierre
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Karolinska Institute, Sweden.
    Lindahl, Mikael
    Tully, Philip J.
    Hellgren-Kotaleski, Jeanette
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Karolinska Institute, Sweden.
Functional Relevance of Different Basal Ganglia Pathways Investigated in a Spiking Model with Reward Dependent Plasticity (2016). In: Frontiers in Neural Circuits, ISSN 1662-5110, E-ISSN 1662-5110, Vol. 10, 53. Article in journal (Refereed)
    Abstract [en]

    The brain enables animals to behaviorally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviors are achieved and computed by its underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to dis-inhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and post-synaptic activity, receptor type, and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.

  • 5.
    Djurfeldt, Mikael
    et al.
    Royal Institute of Technology, Computational Biology and Neurocomputing Group.
    Lundqvist, Mikael
    Royal Institute of Technology, Computational Biology and Neurocomputing Group.
    Johansson, Christopher
    Royal Institute of Technology, Computational Biology and Neurocomputing Group.
    Rehn, Martin
    Royal Institute of Technology, Computational Biology and Neurocomputing Group.
    Ekeberg, Örjan
    Royal Institute of Technology, Computational Biology and Neurocomputing Group.
    Lansner, Anders
    Royal Institute of Technology, Computational Biology and Neurocomputing Group.
Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer (2008). In: IBM Journal of Research and Development, ISSN 0018-8646, E-ISSN 2151-8556, Vol. 52, no 1-2, 31-41 p. Article in journal (Refereed)
    Abstract [en]

Biologically detailed large-scale models of the brain can now be simulated thanks to increasingly powerful massively parallel supercomputers. We present an overview, for the general technical reader, of a neuronal network model of layers II/III of the neocortex built with biophysical model neurons. These simulations, carried out on an IBM Blue Gene/L supercomputer, comprise up to 22 million neurons and 11 billion synapses, which makes them the largest simulations of this type ever performed. Such model sizes correspond to the cortex of a small mammal. The SPLIT library, used for these simulations, runs on single-processor as well as massively parallel machines. Performance measurements show good scaling behavior on the Blue Gene/L supercomputer up to 8,192 processors. Several key phenomena seen in the living brain appear as emergent phenomena in the simulations. We discuss the role of this kind of model in neuroscience and note that full-scale models may be necessary to preserve natural dynamics. We also discuss the need for software tools for the specification of models as well as for analysis and visualization of output data. Combining models that range from abstract connectionist type to biophysically detailed will help us unravel the basic principles underlying neocortical function.

  • 6. Eriksson, Johan
    et al.
    Vogel, Edward K.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). KTH Royal Institute of Technology, Sweden.
    Bergström, Fredrik
    Nyberg, Lars
Neurocognitive Architecture of Working Memory (2015). In: Neuron, ISSN 0896-6273, E-ISSN 1097-4199, Vol. 88, no 1, 33-46 p. Article, review/survey (Refereed)
    Abstract [en]

    A crucial role for working memory in temporary information processing and guidance of complex behavior has been recognized for many decades. There is emerging consensus that working-memory maintenance results from the interactions among long-term memory representations and basic processes, including attention, that are instantiated as reentrant loops between frontal and posterior cortical areas, as well as sub-cortical structures. The nature of such interactions can account for capacity limitations, lifespan changes, and restricted transfer after working-memory training. Recent data and models indicate that working memory may also be based on synaptic plasticity and that working memory can operate on non-consciously perceived information.

  • 7. Fiebig, Florian
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation (2017). In: Journal of Neuroscience, ISSN 0270-6474, E-ISSN 1529-2401, Vol. 37, no 1, 83-96 p. Article in journal (Refereed)
    Abstract [en]

A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism.

  • 8. Fiebig, Florian
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Memory consolidation from seconds to weeks: a three-stage neural network model with autonomous reinstatement dynamics (2014). In: Frontiers in Computational Neuroscience, ISSN 1662-5188, E-ISSN 1662-5188, Vol. 8, 64. Article in journal (Refereed)
    Abstract [en]

Declarative long-term memories are not created in an instant. Gradual stabilization and temporally shifting dependence of acquired declarative memories on different brain regions, called systems consolidation, can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place cell reactivations during slow-wave sleep, are supposed to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the necessary stochastic reactivation dynamics for successful episodic memory consolidation. The resulting learning system is shown to exhibit classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces peculiar biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories, similar to the effects of benzodiazepines on memory.

  • 9.
    Herman, Pawel Andrzej
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Lundqvist, Mikael
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Nested theta to gamma oscillations and precise spatiotemporal firing during memory retrieval in a simulated attractor network (2013). In: Brain Research, ISSN 0006-8993, E-ISSN 1872-6240, Vol. 1536, no S1, 68-87 p. Article in journal (Refereed)
    Abstract [en]

    Nested oscillations, where the phase of the underlying slow rhythm modulates the power of faster oscillations, have recently attracted considerable research attention as the increased phase-coupling of cross-frequency oscillations has been shown to relate to memory processes. Here we investigate the hypothesis that reactivations of memory patterns, induced by either external stimuli or internal dynamics, are manifested as distributed cell assemblies oscillating at gamma-like frequencies with life-times on a theta scale. For this purpose, we study the spatiotemporal oscillatory dynamics of a previously developed meso-scale attractor network model as a correlate of its memory function. The focus is on a hierarchical nested organization of neural oscillations in delta/theta (2–5 Hz) and gamma frequency bands (25–35 Hz), and in some conditions even in lower alpha band (8–12 Hz), which emerge in the synthesized field potentials during attractor memory retrieval. We also examine spiking behavior of the network in close relation to oscillations. Despite highly irregular firing during memory retrieval and random connectivity within each cell assembly, we observe precise spatiotemporal firing patterns that repeat across memory activations at a rate higher than expected from random firing. In contrast to earlier studies aimed at modeling neural oscillations, our attractor memory network allows us to elaborate on the functional context of emerging rhythms and discuss their relevance. We provide support for the hypothesis that the dynamics of coherent delta/theta oscillations constitute an important aspect of the formation and replay of neuronal assemblies.

  • 10. Kaplan, Bernhard A.
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system (2014). In: Frontiers in Neural Circuits, ISSN 1662-5110, E-ISSN 1662-5110, Vol. 8, 5. Article in journal (Refereed)
    Abstract [en]

    Olfactory sensory information passes through several processing stages before an odor percept emerges. The question how the olfactory system learns to create odor representations linking those different levels and how it learns to connect and discriminate between them is largely unresolved. We present a large-scale network model with single and multi-compartmental Hodgkin-Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. The properties of ORNs were tuned to show saturated response curves with increasing concentration as seen in experiments. On the level of the OB we explored the possibility of using a fuzzy concentration interval code, which was implemented through dendro-dendritic inhibition leading to winner-take-all like dynamics between mitral/tufted cells belonging to the same glomerulus. The connectivity from mitral/tufted cells to PC neurons was self-organized from a mutual information measure and by using a competitive Hebbian-Bayesian learning algorithm based on the response patterns of mitral/tufted cells to different odors yielding a distributed feed-forward projection to the PC. The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian-Bayesian learning. We demonstrate the functionality of the model in a one-sniff-learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures.

  • 11. Kaplan, Bernhard A.
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Masson, Guillaume S.
    Perrinet, Laurent U.
Anisotropic connectivity implements motion-based prediction in a spiking neural network (2013). In: Frontiers in Computational Neuroscience, ISSN 1662-5188, E-ISSN 1662-5188, Vol. 7, UNSP 112. Article in journal (Refereed)
    Abstract [en]

    Predictive coding hypothesizes that the brain explicitly infers upcoming sensory input to establish a coherent representation of the world. Although it is becoming generally accepted, it is not clear on which level spiking neural networks may implement predictive coding and what function their connectivity may have. We present a network model of conductance-based integrate-and-fire neurons inspired by the architecture of retinotopic cortical areas that assumes predictive coding is implemented through network connectivity, namely in the connection delays and in selectiveness for the tuning properties of source and target cells. We show that the applied connection pattern leads to motion-based prediction in an experiment tracking a moving dot. In contrast to our proposed model, a network with random or isotropic connectivity fails to predict the path when the moving dot disappears. Furthermore, we show that a simple linear decoding approach is sufficient to transform neuronal spiking activity into a probabilistic estimate for reading out the target trajectory.
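
    The linear readout mentioned at the end of the abstract can be illustrated with a population-vector-style decoder. This is a generic sketch under assumed Gaussian tuning, not the paper's exact decoding scheme:

    ```python
    import numpy as np

    def linear_decode(rates, preferred_positions):
        """Simple linear readout: normalize firing rates into weights
        and average the cells' preferred positions."""
        rates = np.asarray(rates, dtype=float)
        w = rates / rates.sum()
        return w @ np.asarray(preferred_positions, dtype=float)

    # Cells tuned to positions along a line; activity peaked near 0.6.
    pos = np.linspace(0.0, 1.0, 11)
    rates = np.exp(-((pos - 0.6) ** 2) / 0.02)
    print(linear_decode(rates, pos))  # ~0.6
    ```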

  • 12. Knight, James C.
    et al.
    Tully, Philip J.
    Kaplan, Bernhard A.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden; Karolinska Institute, Sweden.
    Furber, Steve B.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware (2016). In: Frontiers in Neuroanatomy, ISSN 1662-5129, E-ISSN 1662-5129, Vol. 10, 37. Article in journal (Refereed)
    Abstract [en]

SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
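
    The event-driven trace update at the heart of such an implementation can be sketched as follows. This is illustrative Python, not the SpiNNaker code; the full BCPNN rule keeps several cascaded trace stages per synapse:

    ```python
    import math

    class EventDrivenTrace:
        """Instead of decaying a low-pass-filtered spike trace every
        time-step, apply the analytic solution
        z(t) = z(t0) * exp(-(t - t0)/tau) only when a spike arrives."""
        def __init__(self, tau):
            self.tau = tau
            self.z = 0.0
            self.t_last = 0.0

        def on_spike(self, t, increment=1.0):
            self.z *= math.exp(-(t - self.t_last) / self.tau)  # decay since last event
            self.z += increment                                # add spike contribution
            self.t_last = t
            return self.z

    trace = EventDrivenTrace(tau=20.0)
    for t in (1.0, 5.0, 50.0):   # spike times in ms
        print(t, trace.on_spike(t))
    ```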

  • 13.
    Krishnamurthy, Pradeep
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Silberberg, Gilad
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
A cortical attractor network with Martinotti cells driven by facilitating synapses (2012). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 7, no 4, e30752. Article in journal (Refereed)
    Abstract [en]

    The population of pyramidal cells significantly outnumbers the inhibitory interneurons in the neocortex, while at the same time the diversity of interneuron types is much more pronounced. One acknowledged key role of inhibition is to control the rate and patterning of pyramidal cell firing via negative feedback, but most likely the diversity of inhibitory pathways is matched by a corresponding diversity of functional roles. An important distinguishing feature of cortical interneurons is the variability of the short-term plasticity properties of synapses received from pyramidal cells. The Martinotti cell type has recently come under scrutiny due to the distinctly facilitating nature of the synapses they receive from pyramidal cells. This distinguishes these neurons from basket cells and other inhibitory interneurons typically targeted by depressing synapses. A key aspect of the work reported here has been to pinpoint the role of this variability. We first set out to reproduce quantitatively based on in vitro data the di-synaptic inhibitory microcircuit connecting two pyramidal cells via one or a few Martinotti cells. In a second step, we embedded this microcircuit in a previously developed attractor memory network model of neocortical layers 2/3. This model network demonstrated that basket cells with their characteristic depressing synapses are the first to discharge when the network enters an attractor state and that Martinotti cells respond with a delay, thereby shifting the excitation-inhibition balance and acting to terminate the attractor state. A parameter sensitivity analysis suggested that Martinotti cells might, in fact, play a dominant role in setting the attractor dwell time and thus cortical speed of processing, with cellular adaptation and synaptic depression having a less prominent role than previously thought.

  • 14.
    Krishnamurthy, Pradeep
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology (KTH), Sweden.
    Silberberg, Gilad
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology (KTH), Sweden.
Long-range recruitment of Martinotti cells causes surround suppression and promotes saliency in an attractor network model (2015). In: Frontiers in Neural Circuits, ISSN 1662-5110, E-ISSN 1662-5110, Vol. 9, 60. Article in journal (Refereed)
    Abstract [en]

    Although the importance of long-range connections for cortical information processing has been acknowledged for a long time, most studies focused on the long-range interactions between excitatory cortical neurons. Inhibitory interneurons play an important role in cortical computation and have thus far been studied mainly with respect to their local synaptic interactions within the cortical microcircuitry. A recent study showed that long-range excitatory connections onto Martinotti cells (MC) mediate surround suppression. Here we have extended our previously reported attractor network of pyramidal cells (PC) and MC by introducing long-range connections targeting MC. We have demonstrated how the network with Martinotti cell-mediated long-range inhibition gives rise to surround suppression and also promotes saliency of locations at which simple non-uniformities in the stimulus field are introduced. Furthermore, our analysis suggests that the presynaptic dynamics of MC is only ancillary to its orientation tuning property in enabling the network with saliency detection. Lastly, we have also implemented a disinhibitory pathway mediated by another interneuron type (VIP interneurons), which inhibits MC and abolishes surround suppression.

  • 15.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Associative memory models: from the cell-assembly theory to biophysically detailed cortex simulations (2009). In: TINS - Trends in Neurosciences, ISSN 0166-2236, E-ISSN 1878-108X, Vol. 32, no 3, 178-186 p. Article, review/survey (Refereed)
    Abstract [en]

    The second half of the past century saw the emergence of a theory of cortical associative memory function originating in Donald Hebb's hypotheses on activity-dependent synaptic plasticity and cell-assembly formation and dynamics. This conceptual framework has today developed into a theory of attractor memory that brings together many experimental observations from different sources and levels of investigation into computational models displaying information-processing capabilities such as efficient associative memory and holistic perception. Here, we outline a development that might eventually lead to a neurobiologically grounded theory of cortical associative memory.

  • 16.
    Lansner, Anders
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Benjaminsson, Simon
Nexa: A scalable neural simulator with integrated analysis (2012). In: Network: Computation in Neural Systems, ISSN 0954-898X, Vol. 23, no 4, 254-271 p. Article in journal (Refereed)
    Abstract [en]

    Large-scale neural simulations encompass challenges in simulator design, data handling and understanding of simulation output. As the computational power of supercomputers and the size of network models increase, these challenges become even more pronounced. Here we introduce the experimental scalable neural simulator Nexa, for parallel simulation of large-scale neural network models at a high level of biological abstraction and for exploration of the simulation methods involved. It includes firing-rate models and capabilities to build networks using machine learning inspired methods for e.g. self-organization of network architecture and for structural plasticity. We show scalability up to the size of the largest machines currently available for a number of model scenarios. We further demonstrate simulator integration with online analysis and real-time visualization as scalable solutions for the data handling challenges.

  • 17.
    Lansner, Anders
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
    Hemani, Ahmed
    Farahini, Nasim
Spiking Brain Models: Computation, Memory and Communication Constraints for Custom Hardware Implementation (2014). In: 2014 19th Asia and South Pacific Design Automation Conference (ASP DAC), IEEE Computer Society, 2014, 556-562 p. Conference paper (Refereed)
    Abstract [en]

    We estimate the computational capacity required to simulate in real time the neural information processing in the human brain. We show that the computational demands of a detailed implementation are beyond reach of current technology, but that some biologically plausible reductions of problem complexity can give performance gains between two and six orders of magnitude, which put implementations within reach of tomorrow's technology.
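
    The flavor of such a capacity estimate can be reproduced with back-of-envelope arithmetic. The numbers below are common textbook figures, not the paper's own:

    ```python
    # Rough real-time simulation cost for a synapse-dominated brain model.
    neurons = 1e11           # ~10^11 neurons in the human brain (textbook estimate)
    synapses_per_neuron = 1e4
    update_rate = 1e3        # state updates per second (1 ms time-step)
    flops_per_update = 10    # assumed cost of one synaptic update

    total = neurons * synapses_per_neuron * update_rate * flops_per_update
    print(f"{total:.1e} FLOP/s")  # ~10^19, beyond current technology
    # A 10^2- to 10^6-fold reduction, as the paper discusses, brings this
    # down to roughly 10^13-10^17 FLOP/s, within reach of large HPC systems.
    ```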

  • 18.
    Lansner, Anders
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Marklund, Petter
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Sikström, Sverker
    Nilsson, Lars-Göran
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
Reactivation in Working Memory: An Attractor Network Model of Free Recall (2013). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 8, no 8, e73776. Article in journal (Refereed)
    Abstract [en]

    The dynamic nature of human working memory, the general-purpose system for processing continuous input, while keeping no longer externally available information active in the background, is well captured in immediate free recall of supraspan word-lists. Free recall tasks produce several benchmark memory phenomena, like the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view.

  • 19. Lundqvist, Mikael
    et al.
    Compte, Albert
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Bistable, Irregular Firing and Population Oscillations in a Modular Attractor Memory Network (2010). In: PLoS Computational Biology, ISSN 1553-734X, E-ISSN 1553-7358, Vol. 6, no 6, e1000803. Article in journal (Refereed)
    Abstract [en]

    Attractor neural networks are thought to underlie working memory functions in the cerebral cortex. Several such models have been proposed that successfully reproduce firing properties of neurons recorded from monkeys performing working memory tasks. However, the regular temporal structure of spike trains in these models is often incompatible with experimental data. Here, we show that the in vivo observations of bistable activity with irregular firing at the single cell level can be achieved in a large-scale network model with a modular structure in terms of several connected hypercolumns. Despite high irregularity of individual spike trains, the model shows population oscillations in the beta and gamma band in ground and active states, respectively. Irregular firing typically emerges in a high-conductance regime of balanced excitation and inhibition. Population oscillations can produce such a regime, but in previous models only a non-coding ground state was oscillatory. Due to the modular structure of our network, the oscillatory and irregular firing was maintained also in the active state without fine-tuning. Our model provides a novel mechanistic view of how irregular firing emerges in cortical populations as they go from beta to gamma oscillations during memory retrieval.

  • 20.
    Lundqvist, Mikael
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Herman, Pawel
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Effect of Prestimulus Alpha Power, Phase, and Synchronization on Stimulus Detection Rates in a Biophysical Attractor Network Model (2013). In: Journal of Neuroscience, ISSN 0270-6474, E-ISSN 1529-2401, Vol. 33, no 29, 11817+ p. Article in journal (Refereed)
    Abstract [en]

    Spontaneous oscillations measured by local field potentials, electroencephalograms and magnetoencephalograms exhibit a pronounced peak in the alpha band (8-12 Hz) in humans and primates. Both instantaneous power and phase of these ongoing oscillations have commonly been observed to correlate with psychophysical performance in stimulus detection tasks. We use a novel model-based approach to study the effect of prestimulus oscillations on detection rate. A previously developed biophysically detailed attractor network exhibits spontaneous oscillations in the alpha range before a stimulus is presented and transiently switches to gamma-like oscillations on successful detection. We demonstrate that both phase and power of the ongoing alpha oscillations modulate the probability of such state transitions. The power can either positively or negatively correlate with the detection rate, in agreement with experimental findings, depending on the underlying neural mechanism modulating the oscillatory power. Furthermore, the spatially distributed alpha oscillators of the network can be synchronized by global nonspecific weak excitatory signals. These synchronization events lead to transient increases in alpha-band power and render the network sensitive to the exact timing of target stimuli, making the alpha cycle function as a temporal mask in line with recent experimental observations. Our results are relevant to several studies that attribute a modulatory role to prestimulus alpha dynamics.

  • 21.
    Lundqvist, Mikael
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
    Herman, Pawel
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
Theta and Gamma Power Increases and Alpha/Beta Power Decreases with Memory Load in an Attractor Network Model (2010). In: Journal of Cognitive Neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 23, no 10, 3008-3020 p. Article in journal (Refereed)
    Abstract [en]

Changes in oscillatory brain activity are strongly correlated with performance in cognitive tasks and modulations in specific frequency bands are associated with working memory tasks. Mesoscale network models allow the study of oscillations as an emergent feature of neuronal activity. Here we extend a previously developed attractor network model, shown to faithfully reproduce single-cell activity during retention and memory recall, with synaptic augmentation. This enables the network to function as a multi-item working memory by cyclic reactivation of up to six items. The reactivation happens at theta frequency, consistent with recent experimental findings, with increasing theta power for each additional item loaded in the network's memory. Furthermore, each memory reactivation is associated with gamma oscillations. Thus, single-cell spike trains as well as gamma oscillations in local groups are nested in the theta cycle. The network also exhibits an idling rhythm in the alpha/beta band associated with a noncoding global attractor. Put together, the resulting effect is increasing theta and gamma power and decreasing alpha/beta power with growing working memory load, rendering the network mechanisms involved a plausible explanation for this often reported behavior.

  • 22.
    Lundqvist, Mikael
    et al.
    Royal Institute of Technology, Department of Computational Biology.
    Herman, Pawel
    Royal Institute of Technology, Department of Computational Biology.
    Lansner, Anders
    Royal Institute of Technology, Department of Computational Biology.
Variability of spike firing during theta-coupled replay of memories in a simulated attractor network (2012). In: Brain Research, ISSN 0006-8993, E-ISSN 1872-6240, Vol. 1434, 152-161 p. Article in journal (Refereed)
    Abstract [en]

Simulation work has recently shown that attractor networks can reproduce Poisson-like variability of single cell spiking, with a coefficient of variation (Cv2) around unity, consistent with cortical data. However, the use of local variability (Lv) measures has revealed area- and layer-specific deviations from Poisson-like firing. In order to test these findings in silico we used a biophysically detailed attractor network model. We show that Lv well above 1, specifically found in superficial cortical layers and prefrontal areas, can indeed be reproduced in such networks and is consistent with periodic replay rather than persistent firing. The memory replay at the theta time scale provides a framework for multi-item memory storage in the model. This article is part of a Special Issue entitled Neural Coding.
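
    Both variability measures are simple functions of the inter-spike intervals (ISIs). A sketch, with the Lv formula taken from the spike-train variability literature rather than quoted from this paper:

    ```python
    import numpy as np

    def cv_squared(isi):
        """Squared coefficient of variation of the ISIs; ~1 for Poisson."""
        isi = np.asarray(isi, dtype=float)
        return np.var(isi) / np.mean(isi) ** 2

    def local_variation(isi):
        """Lv measure (Shinomoto-style): compares adjacent intervals, so it
        is less sensitive to slow rate changes than CV. ~1 for Poisson,
        above 1 for burstier firing."""
        isi = np.asarray(isi, dtype=float)
        a, b = isi[:-1], isi[1:]
        return np.mean(3.0 * (a - b) ** 2 / (a + b) ** 2)

    # Poisson spike train: both measures should come out close to 1.
    rng = np.random.default_rng(0)
    isi = rng.exponential(scale=10.0, size=10000)
    print(cv_squared(isi), local_variation(isi))
    ```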

  • 23.
    Lundqvist, Mikael
    et al.
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Herman, Pawel
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Palva, M.
    Palva, S.
    Silverstein, David
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
Stimulus detection rate and latency, firing rates and 1-40 Hz oscillatory power are modulated by infra-slow fluctuations in a bistable attractor network model (2013). In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 83, 458-471 p. Article in journal (Refereed)
    Abstract [en]

Recordings of membrane and field potentials, firing rates, and oscillation amplitude dynamics show that neuronal activity levels in cortical and subcortical structures exhibit infra-slow fluctuations (ISFs) on time scales from seconds to hundreds of seconds. Similar ISFs are salient also in blood-oxygenation-level dependent (BOLD) signals as well as in psychophysical time series. Functional consequences of ISFs are not fully understood. Here, they were investigated, along with their dynamical implications, in large-scale simulations of cortical network activity. For this purpose, a biophysically detailed hierarchical attractor network model displaying bistability and operating in an oscillatory regime was used. ISFs were imposed as slow fluctuations in either the amplitude or frequency of fast synaptic noise. We found that both mechanisms produced an ISF component in the synthetic local field potentials (LFPs) and modulated the power of 1-40 Hz oscillations. Crucially, in a simulated threshold-stimulus detection task (TSDT), these ISFs were strongly correlated with stimulus detection probabilities and latencies. The results thus show that several phenomena observed in many empirical studies emerge concurrently in the model dynamics, which yields mechanistic insight into how infra-slow excitability fluctuations in large-scale neuronal networks may modulate fast oscillations and perceptual processing. The model also makes several novel predictions that can be experimentally tested in future studies.

  • 24.
    Lundqvist, Mikael
    et al.
Royal Institute of Technology, School of Numerical and Computer Science (CSC).
    Rehn, Martin
Royal Institute of Technology, School of Numerical and Computer Science (CSC).
    Djurfeldt, Mikael
Royal Institute of Technology, School of Numerical and Computer Science (CSC).
    Lansner, Anders
Royal Institute of Technology, School of Numerical and Computer Science (CSC).
Attractor dynamics in a modular network model of neocortex (2006). In: Network, ISSN 0954-898X, E-ISSN 1361-6536, Vol. 17, no 3, 253-276 p. Article in journal (Refereed)
    Abstract [en]

    Starting from the hypothesis that the mammalian neocortex to a first approximation functions as an associative memory of the attractor network type, we formulate a quantitative computational model of neocortical layers 2/3. The model employs biophysically detailed multi-compartmental model neurons with conductance based synapses and includes pyramidal cells and two types of inhibitory interneurons, i.e., regular spiking non-pyramidal cells and basket cells. The simulated network has a minicolumnar as well as a hypercolumnar modular structure and we propose that minicolumns rather than single cells are the basic computational units in neocortex. The minicolumns are represented in full scale and synaptic input to the different types of model neurons is carefully matched to reproduce experimentally measured values and to allow a quantitative reproduction of single cell recordings. Several key phenomena seen experimentally in vitro and in vivo appear as emergent features of this model. It exhibits a robust and fast attractor dynamics with pattern completion and pattern rivalry and it suggests an explanation for the so-called attentional blink phenomenon. During assembly dynamics, the model faithfully reproduces several features of local UP states, as they have been experimentally observed in vitro, as well as oscillatory behavior similar to that observed in the neocortex.

  • 25. Meli, Cristina
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
A modular attractor associative memory with patchy connectivity and weight pruning (2013). In: Network, ISSN 0954-898X, E-ISSN 1361-6536, Vol. 24, no 4, 129-150 p. Article in journal (Refereed)
    Abstract [en]

    An important research topic in neuroscience is the study of mechanisms underlying memory and the estimation of the information capacity of the biological system. In this report we investigate the performance of a modular attractor network with recurrent connections similar to the cortical long-range connections extending in the horizontal direction. We considered a single learning rule, the BCPNN, which implements a kind of Hebbian learning and we trained the network with sparse random patterns. The storage capacity was measured experimentally for networks of size between 500 and 46 K units with a constant activity level, gradually diluting the connectivity. We show that the storage capacity of the modular network with patchy connectivity is comparable with the theoretical values estimated for simple associative memories and furthermore we introduce a new technique to prune the connectivity, which enhances the storage capacity up to the asymptotic value.
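
    The general shape of a pruning step can be sketched as follows. The paper introduces its own pruning criterion, so simple magnitude thresholding here is only a stand-in:

    ```python
    import numpy as np

    def prune_weights(w, keep_fraction):
        """Zero out the smallest-magnitude weights, keeping only the
        given fraction of connections."""
        flat = np.abs(w).ravel()
        k = int(flat.size * (1.0 - keep_fraction))
        if k == 0:
            return w.copy()
        threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
        return np.where(np.abs(w) >= threshold, w, 0.0)

    rng = np.random.default_rng(1)
    w = rng.normal(size=(100, 100))
    w_pruned = prune_weights(w, keep_fraction=0.2)
    print(np.count_nonzero(w_pruned) / w.size)  # ~0.2
    ```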

  • 26. Petrovici, Mihai A.
    et al.
    Vogginger, Bernhard
    Mueller, Paul
    Breitwieser, Oliver
    Lundqvist, Mikael
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
    Muller, Lyle
    Ehrlich, Matthias
    Destexhe, Alain
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA).
    Schueffny, Rene
    Schemmel, Johannes
    Meier, Karlheinz
Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms (2014). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 9, no 10, e108590. Article in journal (Refereed)
    Abstract [en]

    Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.

  • 27. Sandström, Malin
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Hellgren-Kotaleski, Jeanette
    Rospars, Jean-Pierre
Modeling the response of a population of olfactory receptor neurons to an odorant (2009). In: Journal of Computational Neuroscience, ISSN 0929-5313, E-ISSN 1573-6873, Vol. 27, no 3, 337-355 p. Article in journal (Refereed)
    Abstract [en]

    We modeled the firing rate of populations of olfactory receptor neurons (ORNs) responding to an odorant at different concentrations. Two cases were considered: a population of ORNs that all express the same olfactory receptor (OR), and a population that expresses many different ORs. To take into account ORN variability, we replaced single parameter values in a biophysical ORN model with values drawn from statistical distributions, chosen to correspond to experimental data. For ORNs expressing the same OR, we found that the distributions of firing frequencies are Gaussian at all concentrations, with larger mean and standard deviation at higher concentrations. For a population expressing different ORs, the distribution of firing frequencies can be described as the superposition of a Gaussian distribution and a lognormal distribution. Distributions of maximum value and dynamic range of spiking frequencies in the simulated ORN population were similar to experimental results.
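
    The population-variability approach, i.e., replacing a single parameter value with draws from a distribution, can be sketched as below. The Hill-type rate function and parameter ranges are illustrative, not the paper's biophysical ORN model:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def orn_rate(conc, k_half, f_max=100.0, n=1.5):
        """Saturating (Hill-type) concentration-response curve."""
        return f_max * conc**n / (conc**n + k_half**n)

    # Draw one sensitivity parameter per neuron instead of using a single
    # fixed value, then inspect the spread of firing rates it produces.
    k_half = rng.lognormal(mean=0.0, sigma=0.4, size=5000)
    rates = orn_rate(conc=1.0, k_half=k_half)
    print(rates.mean(), rates.std())
    ```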

  • 28. Schain, Martin
    et al.
    Benjaminsson, Simon
    Varnas, Katarina
    Forsberg, Anton
    Halldin, Christer
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Farde, Lars
    Varrone, Andrea
Arterial input function derived from pairwise correlations between PET-image voxels (2013). In: Journal of Cerebral Blood Flow and Metabolism, ISSN 0271-678X, E-ISSN 1559-7016, Vol. 33, no 7, 1058-1065 p. Article in journal (Refereed)
    Abstract [en]

A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non-invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [C-11]flumazenil and [C-11]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (~3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
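
    The core of the voxel-pair idea can be sketched in a few lines. The toy data and threshold are illustrative assumptions; the published method involves additional selection and averaging steps:

    ```python
    import numpy as np

    def blood_like_voxels(tacs, threshold=0.9):
        """Rank voxels by how many other voxels their time-activity curve
        (one row of `tacs`) correlates with above `threshold`; blood-borne
        activity should be shared across many voxels."""
        r = np.corrcoef(np.asarray(tacs, dtype=float))
        np.fill_diagonal(r, 0.0)
        score = (r > threshold).sum(axis=1)
        return np.argsort(score)[::-1]  # best IDIF candidates first

    # Toy data: 100 voxels, 30 frames; five voxels share a "blood" curve.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 30)
    blood = 10.0 * t * np.exp(-3.0 * t)   # early peak, like arterial activity
    tacs = rng.normal(0.0, 0.05, size=(100, 30))
    tacs[:5] += blood
    print(blood_like_voxels(tacs)[:5])    # voxels 0-4, in some order
    ```

    The image-derived input function would then be averaged from the top-ranked voxels' time-activity curves.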

  • 29. Silverstein, David N.
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden; Karolinska Institutet, Sweden.
Is attentional blink a byproduct of neocortical attractors? (2011). In: Frontiers in Computational Neuroscience, ISSN 1662-5188, E-ISSN 1662-5188, Vol. 5, 13. Article in journal (Refereed)
    Abstract [en]

This study proposes a computational model for attentional blink or blink of the mind, a phenomenon where a human subject misses perception of a later expected visual pattern as two expected visual patterns are presented less than 500 ms apart. A neocortical patch modeled as an attractor network is stimulated with a sequence of 14 patterns 100 ms apart, two of which are expected targets. Patterns that become active attractors are considered recognized. A neocortical patch is represented as a square matrix of hypercolumns, each containing a set of minicolumns with synaptic connections within and across both minicolumns and hypercolumns. Each minicolumn consists of locally connected layer 2/3 pyramidal cells with interacting basket cells and layer 4 pyramidal cells for input stimulation. All neurons are implemented using the Hodgkin-Huxley multi-compartmental cell formalism and include calcium dynamics, and they interact via saturating and depressing AMPA/NMDA and GABA(A) synapses. Stored patterns are encoded with global connectivity of minicolumns across hypercolumns and active patterns compete as the result of lateral inhibition in the network. Stored patterns were stimulated over time intervals to create attractor interference measurable with synthetic spike traces. This setup corresponds with item presentations in human visual attentional blink studies. Stored target patterns were depolarized while distractor patterns were hyperpolarized to represent expectation of items in working memory. Simulations replicated the basic attentional blink phenomena and showed a reduced blink when targets were more salient. Studies on the inhibitory effect of benzodiazepines on attentional blink in human subjects were compared with neocortical simulations where the GABA(A) receptor conductance and decay time were increased. Simulations showed increases in the attentional blink duration, agreeing with observations in human studies. In addition, sensitivity analysis was performed on key parameters of the model, including Ca(2+)-gated K(+) channel conductance, synaptic depression, GABA(A) channel conductance and the NMDA/AMPA ratio of charge entry.

  • 30. Tully, Philip J.
    et al.
    Lindén, Henrik
    Hennig, Matthias H.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology (KTH), Sweden; Karolinska Institute, Sweden.
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences (2016). In: PLoS Computational Biology, ISSN 1553-734X, E-ISSN 1553-7358, Vol. 12, no 5, e1004954. Article in journal (Refereed)
    Abstract [en]

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
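
    For reference, the weight mapping that underlies BCPNN learning: weights follow the log-odds of co-activation, w_ij = log(P_ij / (P_i P_j)), with bias beta_j = log(P_j). A minimal sketch, with the probability estimates assumed given (the spike-based rule estimates them online with cascaded low-pass filters):

    ```python
    import numpy as np

    def bcpnn_weight(p_i, p_j, p_ij, eps=1e-8):
        """Map activation probabilities to a BCPNN weight and bias:
        w_ij = log(p_ij / (p_i * p_j)), beta_j = log(p_j)."""
        w = np.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))
        beta = np.log(p_j + eps)
        return w, beta

    # Independent units give w ~ 0; co-activated units give w > 0.
    print(bcpnn_weight(0.1, 0.1, 0.01))  # w ~ 0: no association
    print(bcpnn_weight(0.1, 0.1, 0.05))  # w > 0: co-activation
    ```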

  • 31. Vogginger, Bernhard
    et al.
    Schueffny, Rene
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology (KTH), Sweden.
    Cederström, Love
    Partzsch, Johannes
    Hoeppner, Sebastian
Reducing the computational footprint for real-time BCPNN learning (2015). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 9, 2. Article in journal (Refereed)
    Abstract [en]

The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which reduces the number of basic arithmetic operations per update by half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
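
    The look-up-table speedup for the exponential decay can be sketched as follows. The table resolution and size are illustrative choices, not the paper's:

    ```python
    import math

    # Pre-tabulate exp(-k*dt/tau) for integer step counts k, then replace
    # the repeated exponential evaluation in the event-driven update with
    # an array read.
    DT, TAU, TABLE_SIZE = 0.1, 20.0, 4096
    EXP_LUT = [math.exp(-k * DT / TAU) for k in range(TABLE_SIZE)]

    def decay(z, steps_elapsed):
        """Decay state variable z over an integer number of time-steps."""
        if steps_elapsed >= TABLE_SIZE:
            return 0.0  # effectively fully decayed beyond the table range
        return z * EXP_LUT[steps_elapsed]

    print(decay(1.0, 100))             # LUT result
    print(math.exp(-100 * DT / TAU))   # reference value
    ```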
