TY - JOUR
T1 - SNAVA—A real-time multi-FPGA multi-model spiking neural network simulation architecture
AU - Sripad, Athul
AU - Sanchez, Giovanny
AU - Zapata, Mireya
AU - Pirrone, Vito
AU - Dorta, Taho
AU - Cambria, Salvatore
AU - Marti, Albert
AU - Krishnamurthy, Karthikeyan
AU - Madrenas, Jordi
N1 - Publisher Copyright:
© 2017 Elsevier Ltd
PY - 2018/1
Y1 - 2018/1
AB - The Spiking Neural Networks for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. The architecture is implemented on modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs and by analyzing and customizing the instruction set according to the processing needs, achieving maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, compile the neuron-synapse model, and monitor the SNN's activity. Our contribution provides a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures and significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities.
KW - Digital neural simulation
KW - FPGA
KW - Neuromorphic systems
KW - SNNs
UR - http://www.scopus.com/inward/record.url?scp=85031793480&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2017.09.011
DO - 10.1016/j.neunet.2017.09.011
M3 - Article
C2 - 29054036
AN - SCOPUS:85031793480
SN - 0893-6080
VL - 97
SP - 28
EP - 45
JO - Neural Networks
JF - Neural Networks
ER -