
Exploring Energy-Efficiency in Neural Systems with Spike-based Machine Intelligence

Applied AI Seminar Series

Abstract: Spiking Neural Networks (SNNs) have recently emerged as an alternative to conventional deep learning because of the substantial energy-efficiency benefits they offer on neuromorphic hardware.

In this presentation, I will describe training techniques for SNNs that bring large benefits in latency, accuracy, and even robustness. We will first delve into a recently proposed method, Batch Normalization Through Time (BNTT), which allows us to train SNNs from scratch at very low latency and enables applications such as video segmentation, as well as scenarios beyond traditional learning, such as federated training. Then, I will discuss novel SNN architectures with temporal feedback connections, discovered through neural architecture search, that further lower latency, improve energy efficiency, and reveal interesting temporal effects. Finally, I will turn to the hardware perspective of SNNs implemented on standard CMOS and compute-in-memory accelerators, using our recently proposed SATA and SpikeSim tools. It turns out that the multi-timestep computation in SNNs can incur extra memory overhead and repeated DRAM accesses that annul the advantages gained from compute sparsity. I will highlight techniques such as membrane-potential sharing and early time-step exit that exploit the temporal dimension of SNNs to reduce this overhead.
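To make the BNTT idea concrete, the following is a minimal PyTorch-style sketch, not the speaker's implementation: a leaky integrate-and-fire (LIF) layer that applies a separate batch-normalization module, with its own statistics and affine parameters, at each timestep. The layer sizes, leak factor, threshold, and soft-reset rule are illustrative assumptions; actual BNTT training would also use a surrogate gradient for the spike threshold.

```python
import torch
import torch.nn as nn

class BNTTLinearLIF(nn.Module):
    """Fully connected LIF layer with one BatchNorm per timestep (BNTT-style sketch)."""

    def __init__(self, in_features, out_features, timesteps, leak=0.95, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features, bias=False)
        # Core BNTT idea: a separate BatchNorm (own statistics/affine params) per timestep.
        self.bntt = nn.ModuleList([nn.BatchNorm1d(out_features) for _ in range(timesteps)])
        self.timesteps = timesteps
        self.leak = leak
        self.threshold = threshold

    def forward(self, x_seq):
        # x_seq: (timesteps, batch, in_features) binary spike inputs
        batch = x_seq.shape[1]
        mem = torch.zeros(batch, self.fc.out_features, device=x_seq.device)
        spikes = []
        for t in range(self.timesteps):
            # Normalize the synaptic current with the timestep-specific BatchNorm,
            # then integrate it into the leaky membrane potential.
            current = self.bntt[t](self.fc(x_seq[t]))
            mem = self.leak * mem + current
            spk = (mem >= self.threshold).float()  # hard threshold; training would use a surrogate gradient
            mem = mem - spk * self.threshold       # soft reset after a spike
            spikes.append(spk)
        return torch.stack(spikes)

# Example: 5 timesteps of random binary spike inputs for a batch of 8 samples.
layer = BNTTLinearLIF(in_features=64, out_features=32, timesteps=5)
out = layer((torch.rand(5, 8, 64) < 0.3).float())
print(out.shape)  # torch.Size([5, 8, 32])
```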

Bio: Priya Panda is an assistant professor in the electrical engineering department at Yale University, USA. She received her B.E. and Master's degrees from BITS Pilani, India, and her PhD from Purdue University. During her PhD, she interned at Intel Labs, where she developed large-scale spiking neural network algorithms for benchmarking the Loihi chip. She is the recipient of the 2019 Amazon Research Award, the 2022 Google Research Scholar Award, and the 2022 DARPA Riser Award. Her research interests lie in neuromorphic computing, energy-efficient accelerators, and in-memory processing.