First of all, thanks to the CSDN platform: it turns out I am not the only one agonizing over SNN mapping. After reading the mapping literature last year, I felt that genuine innovation here is hard. The optimization work mostly follows the same playbook as evolutionary-style algorithms, and yet when you implement it yourself, your results simply never match the numbers in the papers. So after a few papers I lost interest in the direction, but, for a whole pile of reasons you can imagine, I still have to work on mapping. I could cry: it is grunt work, with many experimental settings, piles of simulation data, endless algorithm variations to tune, and comparisons against prior work that rarely look impressive. Worst of all, the resulting papers tend to land in mediocre venues.
Still, someone has to do the heavy lifting, and as a person who habitually looks very obedient, I naturally make a fine laborer. Deep down I was reluctant; otherwise I would have sorted this out last year. It is 2021, and the job has landed on me anyway. I will not say anything else: honestly, my heart says no, but my answer is still "fine, okay". Anyway, this year I am starting on it. I hope CSDN friends who spot anything wrong in what I write will kindly point it out, so I do not wander down too many dead ends. Your help is saving countless brain cells and hairs of mine. Thanks♪(・ω・)ノ
The CSDN posts on SNN mapping I have found so far:
1. Mapping Spiking Neural Networks onto a Manycore Neuromorphic Architecture
Lin C.-K., Wild A., Chinya G. N., et al. Mapping spiking neural networks onto a manycore neuromorphic architecture [C]// ACM SIGPLAN Conference. ACM, 2018: 78-89.
2. Optimized Mapping Spiking Neural Networks onto Network-on-Chip
4. A Cross-layer based mapping for spiking neural network onto network on chip
These posts all seem to come from "嘀嗒一声小刺猬", a blogger I follow; my thanks to him.
5. Mapping Spiking Neural Networks to Neuromorphic Hardware
A. Balaji et al., "Mapping Spiking Neural Networks to Neuromorphic Hardware," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 1, pp. 76-86, Jan. 2020, doi: 10.1109/TVLSI.2019.2951493.
Neuromorphic hardware implements biological neurons and synapses to execute a spiking neural network (SNN)-based machine learning. We present SpiNeMap, a design methodology to map SNNs to crossbar-based neuromorphic hardware, minimizing spike latency and energy consumption. SpiNeMap operates in two steps: SpiNeCluster and SpiNePlacer. SpiNeCluster is a heuristic-based clustering technique to partition an SNN into clusters of synapses, where intracluster local synapses are mapped within crossbars of the hardware and intercluster global synapses are mapped to the shared interconnect. SpiNeCluster minimizes the number of spikes on global synapses, which reduces spike congestion and improves application performance. SpiNePlacer then finds the best placement of local and global synapses on the hardware using a metaheuristic-based approach to minimize energy consumption and spike latency. We evaluate SpiNeMap using synthetic and realistic SNNs on a state-of-the-art neuromorphic hardware. We show that SpiNeMap reduces average energy consumption by 45% and spike latency by 21%, compared to the best-performing SNN mapping technique.
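To make the SpiNeCluster objective concrete, here is a tiny sketch (my own toy code, not the paper's implementation) of the quantity it minimizes: the spikes carried on inter-cluster, i.e. global, synapses.

```python
# Toy model of the SpiNeCluster objective (my own sketch, not the paper's code).
# An SNN is a set of synapses (src, dst) weighted by spike counts; given a
# neuron->cluster assignment, synapses whose endpoints land in different
# clusters become "global" and their spikes load the shared interconnect.

def global_spikes(synapses, cluster_of):
    """Total spikes carried on inter-cluster (global) synapses.

    synapses   -- list of (src_neuron, dst_neuron, spike_count)
    cluster_of -- dict mapping each neuron to its cluster id
    """
    return sum(spikes for src, dst, spikes in synapses
               if cluster_of[src] != cluster_of[dst])

# Toy example: 4 neurons, two clusters of two.
synapses = [(0, 1, 10), (1, 2, 5), (2, 3, 8), (0, 3, 2)]
cluster_of = {0: 0, 1: 0, 2: 1, 3: 1}
print(global_spikes(synapses, cluster_of))  # synapses 1->2 and 0->3 cross: 5 + 2 = 7
```

A good clustering keeps the heavy synapses (here the 10-spike and 8-spike ones) inside a cluster, which is exactly what the example assignment does.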
6. Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware
Balaji, A., Marty, T., Das, A. et al. Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware. J Sign Process Syst 92, 1293–1302 (2020). https://doi.org/10.1007/s11265-020-01573-8
Neuromorphic architectures implement biological neurons and synapses to execute machine learning algorithms with spiking neurons and bio-inspired learning algorithms. These architectures are energy efficient and therefore, suitable for cognitive information processing on resource and power-constrained environments, ones where sensor and edge nodes of internet-of-things (IoT) operate. To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time based solutions, where the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize some objective functions such as minimizing spike communication or maximizing resource utilization. In many emerging applications, machine learning models may change based on the input using some online learning rules. In online learning, new connections may form or existing connections may disappear at run-time based on input excitation. Therefore, an already mapped SNN may need to be re-mapped to the neuromorphic hardware to ensure optimal performance. Unfortunately, due to the high computation time, design-time based approaches are not suitable for remapping a machine learning model at run-time after every learning epoch. In this paper, we propose a design methodology to partition and map the neurons and synapses of online learning SNN-based applications to neuromorphic architectures at run-time. Our design methodology operates in two steps – step 1 is a layer-wise greedy approach to partition SNNs into clusters of neurons and synapses incorporating the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization algorithm that minimizes the total spikes communicated between clusters, improving energy consumption on the shared interconnect of the architecture. We conduct experiments to evaluate the feasibility of our algorithm using synthetic and realistic SNN-based applications. 
We demonstrate that our algorithm reduces SNN mapping time by an average 780x compared to a state-of-the-art design-time based SNN partitioning approach with only 6.25% lower solution quality.
Challenge: To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time based solutions, where the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize some objective functions such as minimizing spike communication or maximizing resource utilization. In many emerging applications, machine learning models may change based on the input using some online learning rules. In online learning, new connections may form or existing connections may disappear at run-time based on input excitation. Therefore, an already mapped SNN may need to be re-mapped to the neuromorphic hardware to ensure optimal performance. Unfortunately, due to the high computation time, design-time based approaches are not suitable for remapping a machine learning model at run-time after every learning epoch.
Approach: Our design methodology operates in two steps – step 1 is a layer-wise greedy approach to partition SNNs into clusters of neurons and synapses incorporating the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization algorithm that minimizes the total spikes communicated between clusters, improving energy consumption on the shared interconnect of the architecture.
Result: We demonstrate that our algorithm reduces SNN mapping time by an average 780x compared to a state-of-the-art design-time based SNN partitioning approach with only 6.25% lower solution quality.
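Step 2 of their methodology can be sketched as a plain hill-climber over neuron-to-cluster moves. Everything below (the move set, the per-cluster capacity standing in for crossbar limits, the data layout) is my own reconstruction from the abstract, not the authors' code:

```python
import random

def inter_cluster_spikes(synapses, cluster_of):
    # Spikes on synapses whose endpoints sit in different clusters.
    return sum(s for u, v, s in synapses if cluster_of[u] != cluster_of[v])

def hill_climb(synapses, cluster_of, capacity, iters=200, seed=0):
    """Randomized hill-climbing (a toy stand-in for the paper's step 2):
    repeatedly try moving one neuron to another cluster and keep only the
    moves that reduce inter-cluster spike traffic, while never letting a
    cluster exceed `capacity` neurons (a proxy for crossbar size)."""
    rng = random.Random(seed)
    cluster_of = dict(cluster_of)            # do not mutate the caller's mapping
    clusters = sorted(set(cluster_of.values()))
    sizes = {c: 0 for c in clusters}
    for c in cluster_of.values():
        sizes[c] += 1
    cost = inter_cluster_spikes(synapses, cluster_of)
    neurons = list(cluster_of)
    for _ in range(iters):
        n = rng.choice(neurons)
        old, new = cluster_of[n], rng.choice(clusters)
        if new == old or sizes[new] >= capacity:
            continue                         # illegal or no-op move
        cluster_of[n] = new                  # tentatively apply the move
        new_cost = inter_cluster_spikes(synapses, cluster_of)
        if new_cost < cost:                  # accept only improving moves
            cost = new_cost
            sizes[old] -= 1
            sizes[new] += 1
        else:                                # revert
            cluster_of[n] = old
    return cluster_of, cost

synapses = [(0, 1, 10), (1, 2, 5), (2, 3, 8), (0, 3, 2)]
start = {0: 0, 1: 1, 2: 0, 3: 1}             # deliberately bad initial mapping
mapping, cost = hill_climb(synapses, start, capacity=3)
print(cost)                                  # no worse than the initial 25
```

Note this greedy accept-if-better rule can get stuck in local minima; that is the price paid for the fast run-time remapping the paper targets.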
Why crossbars + NoC: A neuromorphic architecture is typically designed using crossbars, which can accommodate only a limited number of synapses per neuron to reduce energy consumption. To build a large neuromorphic chip, multiple crossbars are integrated using a shared interconnect such as a network-on-chip (NoC).
Typical mapping approach: To map an SNN to these architectures, the common practice is to partition the neurons and synapses of the SNN into clusters and map these clusters to the crossbars, optimizing hardware performance, e.g. minimizing the number of spikes communicated between crossbars, which reduces energy consumption.
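In its simplest greedy, layer-wise form, that partitioning step looks roughly like the following (a sketch under my own assumptions, not any particular paper's algorithm): walk the network layer by layer and pack neurons into a cluster until the crossbar-size limit is hit, then open a new cluster.

```python
# Toy greedy layer-wise partitioning (my own sketch, reconstructed from the
# abstracts above, not the authors' code): every cluster must fit one
# crossbar, so we cap the number of neurons per cluster.

def greedy_partition(layers, crossbar_size):
    """layers: list of lists of neuron ids, in layer (topological) order.
    Returns a dict neuron -> cluster id; each cluster holds at most
    crossbar_size neurons taken from consecutive positions."""
    cluster_of, cluster, used = {}, 0, 0
    for layer in layers:
        for neuron in layer:
            if used == crossbar_size:    # current crossbar full: open a new one
                cluster += 1
                used = 0
            cluster_of[neuron] = neuron_cluster = cluster
            used += 1
    return cluster_of

layers = [[0, 1, 2], [3, 4], [5]]
print(greedy_partition(layers, crossbar_size=4))
# -> {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
```

This only respects the capacity constraint; the spike-traffic objective is then handled by a second optimization pass (e.g. the hill-climbing step described in paper 6 above).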
Prior methods to partition and map an SNN to neuromorphic hardware, such as PSOPART [16], SpiNeMap [6], PyCARL [4], NEUTRAMS [25] and DFSynthesizer [42], are design-time approaches that require significant exploration time to generate a good solution.