Lansner, Anders, Professor (ORCID iD: orcid.org/0000-0002-2358-7815)
Publications (10 of 46)
Al Hafiz, M. I., Ravichandran, N., Lansner, A., Herman, P. & Podobas, A. (2025). A Reconfigurable Stream-Based FPGA Accelerator for Bayesian Confidence Propagation Neural Networks. In: Roberto Giorgi; Mirjana Stojilović; Dirk Stroobandt; Piedad Brox Jiménez; Ángel Barriga Barros (Ed.), Applied Reconfigurable Computing. Architectures, Tools, and Applications: 21st International Symposium, ARC 2025, Seville, Spain, April 9–11, 2025, Proceedings. Paper presented at the 21st International Symposium on Applied Reconfigurable Computing (ARC 2025), Seville, Spain, April 9-11, 2025 (pp. 196-213). Cham: Springer.
A Reconfigurable Stream-Based FPGA Accelerator for Bayesian Confidence Propagation Neural Networks
2025 (English). In: Applied Reconfigurable Computing. Architectures, Tools, and Applications: 21st International Symposium, ARC 2025, Seville, Spain, April 9–11, 2025, Proceedings / [ed] Roberto Giorgi; Mirjana Stojilović; Dirk Stroobandt; Piedad Brox Jiménez; Ángel Barriga Barros, Cham: Springer, 2025, p. 196-213. Conference paper, Published paper (Refereed)
Abstract [en]

Brain-like algorithms are attractive, emerging alternatives to classical deep learning methods for various machine learning applications. Brain-like systems can feature local learning rules, both unsupervised/semi-supervised learning, and different types of plasticity (structural/synaptic), allowing them to potentially be faster and more energy-efficient than traditional machine learning alternatives. Among the more salient brain-like algorithms are Bayesian Confidence Propagation Neural Networks (BCPNNs). BCPNN is an important tool for both machine learning and computational neuroscience research, and recent work shows that BCPNN can reach state-of-the-art performance in tasks such as learning and memory recall compared to other models. Unfortunately, BCPNN is primarily executed on slow general-purpose processors (CPUs) or power-hungry graphics processing units (GPUs), reducing its applicability in edge systems, among others. In this work, we design a reconfigurable stream-based accelerator for BCPNN on Field-Programmable Gate Arrays (FPGAs) using the Xilinx Vitis High-Level Synthesis (HLS) flow. Furthermore, we model our accelerator's performance from first principles, and we empirically show that our proposed accelerator (full-featured kernel, non-structural plasticity) is between 1.3x and 5.3x faster than an Nvidia A100 GPU while consuming between 2.62x and 3.19x less power and between 5.8x and 16.5x less energy, without any degradation in performance.
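The computation being streamed is, at its core, the BCPNN trace cascade and weight rule. Below is a minimal NumPy sketch, assuming the commonly published form of the learning rule (fast z-traces and slow p-traces feeding log-odds weights); the time constants, Euler discretization, and variable names are illustrative choices, not details taken from the paper.

```python
import numpy as np

def bcpnn_step(s_pre, s_post, z_pre, z_post, p_pre, p_post, p_joint,
               dt=1.0, tau_z=50.0, tau_p=5000.0, eps=1e-6):
    # Fast z-traces low-pass filter the unit activations (values in 0..1).
    z_pre = z_pre + dt * (s_pre - z_pre) / tau_z
    z_post = z_post + dt * (s_post - z_post) / tau_z
    # Slow p-traces estimate marginal and joint activation probabilities.
    p_pre = p_pre + dt * (z_pre - p_pre) / tau_p
    p_post = p_post + dt * (z_post - p_post) / tau_p
    p_joint = p_joint + dt * (np.outer(z_pre, z_post) - p_joint) / tau_p
    # Bayesian weights and biases follow from the probability estimates.
    w = np.log((p_joint + eps) / (np.outer(p_pre, p_post) + eps))
    b = np.log(p_post + eps)
    return z_pre, z_post, p_pre, p_post, p_joint, w, b
```

The appeal for a stream-based design is visible here: every update is local to a synapse or unit and has a fixed, data-independent operation count, so the cascade pipelines naturally.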

Place, publisher, year, edition, pages
Cham: Springer, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15594
Keywords
BCPNN, FPGA, HLS, Neuromorphic
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:su:diva-243069 (URN) 10.1007/978-3-031-87995-1_12 (DOI) 2-s2.0-105002874652 (Scopus ID) 978-3-031-87994-4 (ISBN) 978-3-031-87995-1 (ISBN)
Conference
21st International Symposium on Applied Reconfigurable Computing (ARC 2025), Seville, Spain, April 9-11, 2025
Available from: 2025-05-09 Created: 2025-05-09 Last updated: 2025-05-09. Bibliographically approved
Al Hafiz, M. I., Ravichandran, N., Lansner, A., Herman, P. & Podobas, A. (2025). Embedded FPGA Acceleration of Brain-Like Neural Networks: Online Learning to Scalable Inference. In: Proceedings 2025 IEEE 18th International Symposium on Embedded Multicore/Many-core Systems-on-Chip MCSoC 2025. Paper presented at 18th IEEE International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC 2025) (pp. 331-338). Piscataway: IEEE.
Embedded FPGA Acceleration of Brain-Like Neural Networks: Online Learning to Scalable Inference
2025 (English). In: Proceedings 2025 IEEE 18th International Symposium on Embedded Multicore/Many-core Systems-on-Chip MCSoC 2025, Piscataway: IEEE, 2025, p. 331-338. Conference paper, Published paper (Refereed)
Abstract [en]

Edge AI increasingly requires models that learn and adapt on-device under a tight energy budget. Mainstream deep learning models, while powerful, are often overparameterized, energy-hungry, and dependent on cloud connectivity. Brain-Like Neural Networks (BLNNs), such as the Bayesian Confidence Propagation Neural Network (BCPNN), offer a neuromorphic alternative by mimicking cortical architecture and biologically constrained learning. They offer sparse architectures with local learning rules and unsupervised/semi-supervised learning, making them well-suited for low-power edge intelligence. However, existing BCPNN implementations rely on GPUs or datacenter FPGAs. This work presents the first embedded FPGA accelerator for BCPNN on a Zynq UltraScale+ SoC (ZCU104) using High-Level Synthesis. We implement both online learning and inference-only kernels with configurable precision (FP32, FP16, and mixed FP16/FXP16). Evaluated on MNIST, Pneumonia, and Breast Cancer datasets, our accelerator delivers up to 17.55% lower latency and 94.1% energy savings over ARM baselines. Our work brings practical, brain-like online learning and scalable inference to edge devices.
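The configurable-precision kernels invite a quick illustration of what FP16 versus 16-bit fixed-point (FXP16) arithmetic does to a BCPNN-style weighted sum. The sketch below is an assumption-laden toy: the Q3.12 fixed-point format, matrix sizes, and int64 accumulation width are illustrative stand-ins, not the paper's actual design choices.

```python
import numpy as np

def to_fxp16(x, frac_bits=12):
    # Quantize to signed 16-bit fixed point (Q3.12 here -- an assumed format).
    scale = 1 << frac_bits
    return np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)

def fxp16_matvec(Wq, xq, frac_bits=12):
    # Widen for accumulation (an HLS kernel would size this bit-exactly),
    # then shift the Q6.24 products back down to the Q3.12 scale.
    acc = Wq.astype(np.int64) @ xq.astype(np.int64)
    return acc >> frac_bits

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 784)).astype(np.float32)
x = rng.random(784).astype(np.float32)

y_fp32 = W @ x
y_fp16 = (W.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)
y_fxp = fxp16_matvec(to_fxp16(W), to_fxp16(x)) / float(1 << 12)

print("fp16 max err:", np.abs(y_fp16 - y_fp32).max())
print("fxp16 max err:", np.abs(y_fxp - y_fp32).max())
```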

Place, publisher, year, edition, pages
Piscataway: IEEE, 2025
Series
IEEE International Symposium on Embedded Multicore Socs (MCSoC), ISSN 2771-3067, E-ISSN 2771-3075
Keywords
BCPNN, BLNN, Embedded, FPGA, HLS, Neuromorphic
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:su:diva-254007 (URN) 10.1109/MCSoC67473.2025.00060 (DOI) 2-s2.0-105032396875 (Scopus ID) 979-8-3315-6571-8 (ISBN) 979-8-3315-6572-5 (ISBN)
Conference
18th IEEE International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC 2025)
Available from: 2026-04-07 Created: 2026-04-07 Last updated: 2026-04-07. Bibliographically approved
Chrysanthidis, N., Fiebig, F., Lansner, A. & Herman, P. (2025). Short-term plasticity influences episodic memory recall: an interplay of synaptic traces in a spiking neural network model. Scientific Reports, 15, Article ID 28164.
Short-term plasticity influences episodic memory recall: an interplay of synaptic traces in a spiking neural network model
2025 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 15, article id 28164. Article in journal (Refereed), Published
Abstract [en]

We investigated the interaction of episodic memory processes with the short-term dynamics of recency effects. This work takes inspiration from a seminal experimental study involving an odor-in-context association task conducted on rats. In the experimental task, rats were presented with odor pairs in two arenas serving as old or new contexts for specific odor items. Rats were rewarded for selecting the odor that was new to the current context. These new-in-context odor items were deliberately presented with higher recency relative to old-in-context items, so that episodic memory was put in conflict with a short-term recency effect. To test our hypothesis that the synaptic interplay of plasticity phenomena on different timescales plays a major role in explaining the rats' performance in such episodic memory tasks, we built a computational spiking neural network model consisting of two reciprocally connected networks that stored contextual and odor information as stable distributed memory patterns. We simulated the experimental task, resulting in a dynamic context-item coupling between the two networks by means of Bayesian–Hebbian plasticity with eligibility traces to account for reward-based learning. We first reproduced quantitatively and explained mechanistically the findings of the experimental study, and then, to further differentiate the impact of short-term plasticity, we simulated an alternative task with old-in-context items presented with higher recency, thus synergistically confounding episodic memory with effects of recency. Our model predicted that higher recency of old-in-context items enhances episodic memory by boosting the activations of old-in-context items. We argue that the model offers a computational framework for studying behavioral implications of the synaptic underpinnings of different memory effects in experimental episodic memory paradigms.
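A reward-gated eligibility-trace mechanism of the kind described can be sketched in a few lines. The version below is a hedged illustration assuming a standard tag-and-commit scheme; the paper's actual Bayesian–Hebbian equations, time constants, and reward signal are not reproduced here.

```python
import numpy as np

def reward_gated_step(pre, post, e, p_joint, reward,
                      dt=1.0, tau_e=1000.0, tau_p=10000.0):
    # Tag recent pre/post coincidences in a decaying eligibility trace e.
    e = e + dt * (np.outer(pre, post) - e) / tau_e
    # Commit the tagged coincidences to the slow probability trace only
    # when a reward signal (a scalar in [0, 1]) arrives.
    p_joint = p_joint + dt * reward * (e - p_joint) / tau_p
    return e, p_joint
```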

Keywords
Attractor dynamics, Bayesian–Hebbian plasticity, Episodic memory, Recency, Spiking cortical memory model
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:su:diva-246622 (URN) 10.1038/s41598-025-12611-5 (DOI) 001542639300007 (ISI) 40750641 (PubMedID) 2-s2.0-105012454086 (Scopus ID)
Available from: 2025-09-12 Created: 2025-09-12 Last updated: 2025-09-12. Bibliographically approved
Ravichandran, N., Lansner, A. & Herman, P. (2025). Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks. Neurocomputing, 626, Article ID 129440.
Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks
2025 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 626, article id 129440. Article in journal (Refereed), Published
Abstract [en]

Neural networks that can capture key principles underlying brain computation offer exciting new opportunities for developing artificial intelligence and brain-like computing algorithms. Such networks remain biologically plausible while leveraging localized forms of synaptic learning rules and modular network architecture found in the neocortex. Compared to backprop-driven deep learning approaches, they provide more suitable models for deployment on neuromorphic hardware and have greater potential for scalability on large-scale computing clusters. The development of such brain-like neural networks depends on having a learning procedure that can build effective internal representations from data. In this work, we introduce and evaluate a brain-like neural network model capable of unsupervised representation learning. It builds on the Bayesian Confidence Propagation Neural Network (BCPNN), which has earlier been implemented as abstract as well as biophysically detailed recurrent attractor neural networks explaining various cortical associative memory phenomena. Here we developed a feedforward BCPNN model to perform representation learning by incorporating a range of brain-like attributes derived from neocortical circuits, such as cortical columns, divisive normalization, Hebbian synaptic plasticity, structural plasticity, sparse activity, and sparse patchy connectivity. The model was tested on a diverse set of popular machine learning benchmarks: grayscale images (MNIST, F-MNIST), RGB natural images (SVHN, CIFAR-10), QSAR (MUV, HIV), and malware detection (EMBER). When using a linear classifier to predict the class labels, the model fared competitively with conventional multi-layer perceptrons and other state-of-the-art brain-like neural networks.
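Two of the listed brain-like attributes lend themselves to a compact illustration: divisive normalization within cortical-column-like modules, and structural plasticity as prune-and-regrow rewiring. The sketch below is a simplified, assumed rendering; the function names, usage score, and rewiring fraction are illustrative, not the model's actual definitions.

```python
import numpy as np

def hypercolumn_softmax(support, n_hc, n_mc):
    # Divisive normalization: units compete via softmax within each
    # hypercolumn of n_mc minicolumns, yielding sparse normalized activity.
    a = support.reshape(n_hc, n_mc)
    a = np.exp(a - a.max(axis=1, keepdims=True))
    return (a / a.sum(axis=1, keepdims=True)).ravel()

def rewire(mask, score, frac=0.01, rng=np.random.default_rng(0)):
    # Structural-plasticity sketch: prune the weakest active connections
    # (lowest usage score) and regrow as many at random silent sites.
    flat_mask, flat_score = mask.ravel().copy(), score.ravel()
    active = np.flatnonzero(flat_mask)
    silent = np.flatnonzero(~flat_mask)
    k = max(1, int(frac * active.size))
    flat_mask[active[np.argsort(flat_score[active])[:k]]] = False
    flat_mask[rng.choice(silent, size=k, replace=False)] = True
    return flat_mask.reshape(mask.shape)
```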

Keywords
Brain-like computing, Brain inspired, Neuroscience informed, Biologically plausible, Representation learning, Unsupervised learning, Hebbian plasticity, BCPNN structural plasticity, Cortical columns, Modular neural networks, Sparsity, Rewiring, Self-organization
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:su:diva-239806 (URN) 10.1016/j.neucom.2025.129440 (DOI) 001425064400001 (ISI) 2-s2.0-85217068343 (Scopus ID)
Available from: 2025-02-26 Created: 2025-02-26 Last updated: 2025-02-26. Bibliographically approved
Ravichandran, N., Lansner, A. & Herman, P. (2024). Spiking representation learning for associative memories. Frontiers in Neuroscience, 18, Article ID 1439414.
Spiking representation learning for associative memories
2024 (English). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 18, article id 1439414. Article in journal (Refereed), Published
Abstract [en]

Networks of interconnected neurons communicating through spiking signals form the bedrock of neural computation. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially when compared to their non-spiking deep learning counterparts. The critical capability needed of SNNs is the ability to learn distributed representations from data and use these representations for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations, leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant for attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
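The spiking regime described (Poisson units with roughly 1 Hz mean and 100 Hz maximum rates) can be illustrated in a few lines of NumPy. This is a sketch under stated assumptions: the linear activation-to-rate mapping, the heavy-tailed activation distribution, and the time step are illustrative, not the model's actual parameters.

```python
import numpy as np

def poisson_spikes(rates_hz, dt_ms=1.0, rng=np.random.default_rng(0)):
    # Each unit fires as a Poisson process: for small dt, the spike
    # probability per step is approximately rate * dt.
    p = np.clip(rates_hz * dt_ms * 1e-3, 0.0, 1.0)
    return rng.random(rates_hz.shape) < p

# Illustrative mapping from sparse activations (0..1) to firing rates
# capped at 100 Hz; most units stay near silent.
rng = np.random.default_rng(1)
act = rng.random(100) ** 6                 # heavy-tailed, mostly near zero
rates = 100.0 * act
spikes = poisson_spikes(rates)             # boolean spike vector, one step
print(rates.mean(), int(spikes.sum()))
```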

Keywords
associative memory, attractor dynamics, BCPNN, Hebbian learning, representation learning, spiking neural networks, structural plasticity, unsupervised learning
National Category
Neurology
Identifiers
urn:nbn:se:su:diva-238936 (URN) 10.3389/fnins.2024.1439414 (DOI) 001328684900001 (ISI) 2-s2.0-85205940985 (Scopus ID)
Available from: 2025-02-06 Created: 2025-02-06 Last updated: 2025-02-06. Bibliographically approved
Ravichandran, N. B., Lansner, A. & Herman, P. (2023). Brain-like Combination of Feedforward and Recurrent Network Components Achieves Prototype Extraction and Robust Pattern Recognition. In: Giuseppe Nicosia; Varun Ojha; Emanuele La Malfa; Gabriele La Malfa; Panos Pardalos; Giuseppe Di Fatta; Giovanni Giuffrida; Renato Umeton (Ed.), Machine Learning, Optimization, and Data Science: 8th International Conference, LOD 2022, Certosa di Pontignano, Italy, September 18–22, 2022, Revised Selected Papers, Part II. Paper presented at 8th International Conference on Machine Learning, Optimization, and Data Science (LOD 2022), Certosa di Pontignano, Italy, September 18–22, 2022 (pp. 488-501). Springer.
Brain-like Combination of Feedforward and Recurrent Network Components Achieves Prototype Extraction and Robust Pattern Recognition
2023 (English). In: Machine Learning, Optimization, and Data Science: 8th International Conference, LOD 2022, Certosa di Pontignano, Italy, September 18–22, 2022, Revised Selected Papers, Part II / [ed] Giuseppe Nicosia; Varun Ojha; Emanuele La Malfa; Gabriele La Malfa; Panos Pardalos; Giuseppe Di Fatta; Giovanni Giuffrida; Renato Umeton, Springer, 2023, p. 488-501. Conference paper, Published paper (Refereed)
Abstract [en]

Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered mechanistic explanations for many cognitive phenomena. However, attractor memory models are typically trained using orthogonal or random patterns to avoid interference between memories, which makes them infeasible for naturally occurring complex correlated stimuli like images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, columnar and laminar cortical architecture, etc. We evaluate the synergistic effects of the feedforward and recurrent network components in complex pattern recognition tasks on the MNIST handwritten digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to perform prototype extraction from the training data and to make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
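The recurrent component's pattern-completion behaviour can be sketched as a simple fixed-point iteration over hidden activity. Below is a hedged toy version, assuming hypercolumn-wise softmax dynamics and pre-learned weights w and biases b; none of the names or dynamics are taken verbatim from the paper.

```python
import numpy as np

def attractor_recall(w, b, cue, n_hc, n_mc, steps=20):
    # Iterate support -> within-hypercolumn softmax until the activity
    # settles; a partial or distorted cue is pulled toward a stored pattern.
    a = cue.copy()
    for _ in range(steps):
        support = b + w @ a
        s = support.reshape(n_hc, n_mc)
        s = np.exp(s - s.max(axis=1, keepdims=True))
        a = (s / s.sum(axis=1, keepdims=True)).ravel()
    return a
```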

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13811
National Category
Computer Sciences
Identifiers
urn:nbn:se:su:diva-234415 (URN) 10.1007/978-3-031-25891-6_37 (DOI) 000995538200037 (ISI) 2-s2.0-85151048951 (Scopus ID) 978-3-031-25890-9 (ISBN) 978-3-031-25891-6 (ISBN)
Conference
8th International Conference on Machine Learning, Optimization, and Data Science (LOD 2022), Certosa di Pontignano, Italy, September 18–22, 2022
Available from: 2024-10-17 Created: 2024-10-17 Last updated: 2024-10-17. Bibliographically approved
Jafarian, M., Huerta, D. C., Villani, G., Lansner, A. & Johansson, K. H. (2023). Cluster Synchronization as a Mechanism of Free Recall in Working Memory Networks. IEEE Open Journal of Control Systems, 2, 454-463.
Cluster Synchronization as a Mechanism of Free Recall in Working Memory Networks
2023 (English). In: IEEE Open Journal of Control Systems, E-ISSN 2694-085X, Vol. 2, p. 454-463. Article in journal (Refereed), Published
Abstract [en]

This article studies free recall, i.e., the reactivation of stored memory items (patterns) in any order, in a model of working memory. Our free recall model is based on a biologically plausible modular neural network composed of H modules, namely hypercolumns, each of which is a bundle of M minicolumns. The coupling weights and constant bias values of the network are determined by a Hebbian plasticity rule. Using techniques from nonlinear stability theory, we show that cluster synchronization is the central mechanism governing free recall of orthogonally encoded patterns. In particular, we show that free recall's cluster synchronization combines two main mechanisms: simultaneous activity of minicolumns representing an encoded pattern, i.e., within-pattern synchronization, together with time-divided activities of minicolumns representing different patterns. We characterize the coupling and bias value conditions under which cluster synchronization emerges. We also discuss the role of heterogeneous coupling weights and bias values in the minicolumns' dynamics in free recall. Specifically, we compare the behaviour of two H×2 networks with identical and non-identical coupling weights and bias values. For these two networks, we obtain bounds on couplings and bias values under which both encoded patterns are recalled. Our analysis shows that having non-identical couplings and bias values for different patterns increases the possibility of their free recall. Numerical simulations are given to validate the theoretical analysis.
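How Hebbian plasticity yields the couplings and biases for orthogonally encoded patterns can be made concrete with a small construction. The sketch below assumes a BCPNN-style log-odds weight and is purely illustrative; the paper instead derives explicit analytical bounds on these quantities.

```python
import numpy as np

def hebbian_couplings(patterns, H, M, eps=1e-6):
    # patterns: (P, H) array; each entry is the index of the active
    # minicolumn per hypercolumn for that pattern (orthogonal encoding).
    P = patterns.shape[0]
    act = np.zeros((P, H * M))
    for mu, pat in enumerate(patterns):
        act[mu, np.arange(H) * M + pat] = 1.0
    p_i = act.mean(axis=0)                  # marginal activation probabilities
    p_ij = act.T @ act / P                  # joint activation probabilities
    # Positive couplings within a pattern, strongly negative across patterns.
    w = np.log((p_ij + eps) / (np.outer(p_i, p_i) + eps))
    bias = np.log(p_i + eps)
    return w, bias

# Two orthogonal patterns in an H=4, M=2 network (cf. the Hx2 analysis).
w, b = hebbian_couplings(np.array([[0, 0, 0, 0], [1, 1, 1, 1]]), H=4, M=2)
```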

Keywords
Network analysis and control, stability of nonlinear systems, systems neuroscience
National Category
Computer Systems
Identifiers
urn:nbn:se:su:diva-249849 (URN) 10.1109/ojcsys.2023.3328201 (DOI) 001328436100002 (ISI) 2-s2.0-85208516044 (Scopus ID)
Available from: 2025-11-20 Created: 2025-11-20 Last updated: 2025-11-20. Bibliographically approved
Lansner, A., Fiebig, F. & Herman, P. (2023). Fast Hebbian plasticity and working memory. Current Opinion in Neurobiology, 83, Article ID 102809.
Fast Hebbian plasticity and working memory
2023 (English). In: Current Opinion in Neurobiology, ISSN 0959-4388, E-ISSN 1873-6882, Vol. 83, article id 102809. Article in journal (Refereed), Published
Abstract [en]

Theories and models of working memory (WM) have, at least since the mid-1990s, been dominated by the persistent activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in the light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty of explaining robust multi-item WM. Consequently, alternative theories are now explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian versus Hebbian synaptic plasticity emerges naturally in this context. In this review, we focus on fast Hebbian plasticity and trace the origins of WM theories and models building on this form of associative learning.

National Category
Information Systems, Social aspects; Neurosciences
Identifiers
urn:nbn:se:su:diva-225642 (URN) 10.1016/j.conb.2023.102809 (DOI) 001120031900001 (ISI) 37980802 (PubMedID) 2-s2.0-85177603528 (Scopus ID)
Available from: 2024-01-31 Created: 2024-01-31 Last updated: 2024-01-31. Bibliographically approved
Chrysanthidis, N., Fiebig, F., Lansner, A. & Herman, P. (2022). Traces of Semantization, from Episodic to Semantic Memory in a Spiking Cortical Network Model. eNeuro, 9(4), Article ID ENEURO.0062-22.2022.
Traces of Semantization, from Episodic to Semantic Memory in a Spiking Cortical Network Model
2022 (English). In: eNeuro, E-ISSN 2373-2822, Vol. 9, no 4, article id ENEURO.0062-22.2022. Article in journal (Refereed), Published
Abstract [en]

Episodic memory is a recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or “semantization,” which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know (R/K) behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian–Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian–Hebbian learning rule called Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing-dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints while also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes. 
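The proposed decoupling mechanism has a simple probabilistic intuition: the more distinct contexts an item is paired with, the thinner its joint probability with any single context, so a Bayesian–Hebbian weight of the form log(p_ij / (p_i p_j)) shrinks. The toy calculation below illustrates this arithmetic only; it is not the spiking model's actual learning dynamics.

```python
import numpy as np

def item_context_weight(k_contexts, m_repeats, n_events):
    # An item is paired m times with each of k distinct contexts, out of
    # n total encoding events. The item-context weight falls as k grows.
    p_item = k_contexts * m_repeats / n_events   # item active in k*m events
    p_ctx = m_repeats / n_events                 # each context seen m times
    p_joint = m_repeats / n_events               # item with this one context
    return np.log(p_joint / (p_item * p_ctx))

for k in (1, 2, 4, 8):
    print(k, round(item_context_weight(k, m_repeats=10, n_events=1000), 2))
# 1 4.61, 2 3.91, 4 3.22, 8 2.53 -- the weight decays with context count
```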

Keywords
Bayesian–Hebbian plasticity, BCPNN, episodic memory, semantization, spiking cortical memory model, STDP, article, learning, memory, nerve cell plasticity, semantic memory, spike, spiking neural network
National Category
Neurosciences
Identifiers
urn:nbn:se:su:diva-212110 (URN) 10.1523/ENEURO.0062-22.2022 (DOI) 35803714 (PubMedID) 2-s2.0-85138107120 (Scopus ID)
Available from: 2022-12-01 Created: 2022-12-01 Last updated: 2022-12-01. Bibliographically approved
Wang, D., Xu, J., Stathis, D., Zhang, L., Li, F., Lansner, A., . . . Zou, Z. (2021). Mapping the BCPNN Learning Rule to a Memristor Model. Frontiers in Neuroscience, 15, Article ID 750458.
Mapping the BCPNN Learning Rule to a Memristor Model
2021 (English). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 15, article id 750458. Article in journal (Refereed), Published
Abstract [en]

The Bayesian Confidence Propagation Neural Network (BCPNN) has been implemented in a way that allows mapping to neural and synaptic processes in the human cortex, and it has been used extensively in detailed spiking models of cortical associative memory function and recently also for machine learning applications. In conventional digital implementations of BCPNN, the von Neumann bottleneck is a major challenge, with synaptic storage and access to it as the dominant cost. The memristor is a non-volatile device ideal for artificial synapses: it fuses computation and storage and thus fundamentally overcomes the von Neumann bottleneck. While the implementation of other neural networks such as Spiking Neural Networks (SNNs) and even Convolutional Neural Networks (CNNs) on memristors has been studied, the implementation of BCPNN has not. In this paper, the BCPNN learning rule is mapped to a memristor model and implemented with a memristor-based architecture. The implementation of the BCPNN learning rule is a mixed-signal design, with the main computation and storage happening in the analog domain. In particular, the nonlinear dopant drift phenomenon of the memristor is exploited to simulate the exponential decay of the synaptic state variables in the BCPNN learning rule. The consistency between the memristor-based solution and the BCPNN learning rule is simulated and verified in Matlab, with a correlation coefficient as high as 0.99. The analog circuit is designed and implemented in the SPICE simulation environment, demonstrating good emulation of the BCPNN learning rule with a correlation coefficient as high as 0.98. This work focuses on demonstrating the feasibility of mapping the BCPNN learning rule to in-circuit computation in memristors. The feasibility of the memristor-based implementation is evaluated and validated in the paper, paving the way for a more efficient BCPNN implementation, toward a real-time brain emulation engine.
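The mapping rests on the memristor's nonlinear dopant drift. A minimal simulation sketch of that behaviour is given below, assuming the widely used HP memristor model with a Joglekar window function; all device parameters are illustrative, and the paper's actual device model and circuits are not reproduced here.

```python
import numpy as np

def memristor_relaxation(x0=0.9, v=-0.2, steps=5000, dt=1e-4,
                         mu=1e-14, D=1e-8, Ron=100.0, Roff=16e3, p=2):
    # HP-style memristor with a Joglekar window -- a standard textbook
    # model assumed here, not necessarily the one used in the paper.
    x, xs = x0, []
    for _ in range(steps):
        R = Ron * x + Roff * (1.0 - x)         # total device resistance
        i = v / R                              # current under constant bias
        f = 1.0 - (2.0 * x - 1.0) ** (2 * p)   # nonlinear dopant-drift window
        x += dt * mu * Ron / D**2 * i * f      # state-variable update
        x = min(max(x, 0.0), 1.0)
        xs.append(x)
    return np.array(xs)

# Under a small constant bias the state variable relaxes smoothly toward
# its bound -- the nonlinearity onto which the decay of BCPNN synaptic
# trace variables is mapped.
trace = memristor_relaxation()
```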

Keywords
Bayesian Confidence Propagation Neural Network (BCPNN), learning rule, memristor, nonlinear dopant drift phenomenon, synaptic state update, spiking neural networks, analog neuromorphic hardware
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:su:diva-201277 (URN) 10.3389/fnins.2021.750458 (DOI) 000738679400001 (ISI) 34955716 (PubMedID)
Available from: 2022-01-24 Created: 2022-01-24 Last updated: 2022-02-24. Bibliographically approved