Authors
Shao-Qun Zhang, Jia-Yi Chen, Jin-Hui Wu, Gao Zhang, Huan Xiong, Bin Gu, Zhi-Hua Zhou
Publication date
2024
Journal
Journal of Machine Learning Research
Volume
25
Issue
194
Pages
1-74
Description
Recent years have seen a surge of interest in spiking neural networks (SNNs). The performance of SNNs hinges not only on the search for apposite architectures and connection weights, as in conventional artificial neural networks, but also on the meticulous configuration of their intrinsic structures. However, there has been a dearth of comprehensive studies examining the impact of intrinsic structures; thus, developers often find it challenging to apply a standardized configuration of SNNs across diverse datasets or tasks. This work delves deep into the intrinsic structures of SNNs. Initially, we draw two key conclusions: (1) the membrane time hyper-parameter is intimately linked to the eigenvalues of the integration operation, dictating the functional topology of spiking dynamics; (2) various hyper-parameters of the firing-reset mechanism govern the overall firing capacity of an SNN, mitigating the injection ratio or sampling density of the input data. These findings elucidate why the efficacy of SNNs hinges heavily on the configuration of their intrinsic structures, and they lead to a recommendation that enhancing the adaptability of these structures contributes to improving the overall performance and applicability of SNNs. Inspired by this recognition, we propose two feasible approaches to enhance SNN learning: developing self-connection architectures and stochastic spiking neurons to augment the adaptability of the integration operation and the firing-reset mechanism, respectively. We theoretically prove that (1) both methods promote the expressive property of universal approximation, (2) the incorporation of self-connection architectures fosters ample …
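For context, the "intrinsic structures" discussed in the abstract correspond to the hyper-parameters of a leaky integrate-and-fire (LIF) style spiking neuron. The sketch below is a minimal, hypothetical illustration, not the paper's implementation: tau_m plays the role of the membrane time hyper-parameter governing the integration operation, while v_threshold and v_reset are firing-reset hyper-parameters; all names and values are assumptions chosen for illustration.

import numpy as np

# Minimal LIF neuron sketch (illustrative assumptions, not the paper's code).
# tau_m       : membrane time hyper-parameter (integration operation)
# v_threshold : firing hyper-parameter (firing-reset mechanism)
# v_reset     : reset hyper-parameter (firing-reset mechanism)
def simulate_lif(inputs, tau_m=2.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    v = v_reset
    spikes = []
    for x in inputs:
        # Leaky integration: the decay factor (1 - dt / tau_m) acts as the
        # eigenvalue-like coefficient of the discrete integration operation,
        # so tau_m shapes the dynamics of the membrane potential.
        v = v * (1.0 - dt / tau_m) + x
        if v >= v_threshold:   # firing: emit a spike ...
            spikes.append(1)
            v = v_reset        # ... then reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Usage: a constant input current yields a periodic spike train whose rate
# depends jointly on tau_m and the firing-reset hyper-parameters.
print(simulate_lif(np.full(10, 0.6)))  # [0 0 1 0 0 1 0 0 1 0]

In this toy setting, shrinking tau_m speeds up the membrane decay while raising v_threshold lowers the firing capacity, which mirrors the abstract's point that these hyper-parameters jointly determine how well a fixed SNN configuration matches a given input distribution.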