
Neuromorphic Computing for Machine Learning

Microelectronics Colloquium

Abstract: The data processing capability of the brain has fascinated scientists for many decades, yet building a machine as compact as the brain with the same powerful capabilities remains elusive. It was a long way from the simple mathematical neuron model of McCulloch and Pitts (1943) to the realization that a multilayer network could be trained to learn. Although the underlying algorithm, back-propagation, was described by Linnainmaa (1970) and first implemented by Werbos (1974), it took another forty years to demonstrate its full potential (Krizhevsky, Sutskever, and Hinton, 2012).

The take-off of machine learning is associated with the availability of large datasets and advances in compute hardware. Neuromorphic computation, in this context, means replacing the core digital operation, the vector-matrix multiplication, with an operation across an array of analog resistors that represents the weight matrix, followed by a nonlinear activation function to capture a thresholding neuron. To mimic biological systems even more closely, spiking neural networks with leaky integrate-and-fire neurons are generating some interest in the academic community.
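As a rough numerical illustration of the crossbar idea (a minimal sketch, not material from the talk): each weight is stored as a conductance, the input vector is applied as row voltages, and Kirchhoff's current law sums the column currents I_j = Σ_i V_i G_ij, performing the vector-matrix multiplication in one analog step before a nonlinear activation. The conductance range, differential weight encoding, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weights_to_conductances(W, g_min=1e-6, g_max=1e-4):
    """Map a signed weight matrix onto two positive conductance arrays
    (differential encoding), since a resistor cannot be negative.
    g_min/g_max are illustrative device limits in siemens."""
    scale = (g_max - g_min) / np.abs(W).max()
    G_pos = g_min + scale * np.clip(W, 0, None)
    G_neg = g_min + scale * np.clip(-W, 0, None)
    return G_pos, G_neg

def crossbar_vmm(v_in, G_pos, G_neg, noise=0.01):
    """Analog vector-matrix multiply: input voltages drive the rows and
    Kirchhoff's current law sums I_j = sum_i V_i * G_ij on each column.
    Multiplicative Gaussian read noise stands in for device non-idealities."""
    I = v_in @ G_pos - v_in @ G_neg   # g_min baseline cancels in the pair
    return I * (1 + noise * rng.standard_normal(I.shape))

W = rng.standard_normal((4, 3))            # weight matrix of one layer
x = rng.standard_normal(4)                 # input activations as voltages
G_pos, G_neg = weights_to_conductances(W)
y = np.maximum(crossbar_vmm(x, G_pos, G_neg), 0.0)  # ReLU thresholding neuron
print(y)
```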
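The leaky integrate-and-fire neuron mentioned above can be sketched just as briefly; the membrane time constant, threshold, and reset value here are generic textbook choices, not parameters from the talk.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau_m=20e-3, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential v leaks toward
    v_rest, integrates the input current, and emits a spike (then
    resets) whenever it crosses v_thresh."""
    v = v_rest
    spikes = []
    for I in input_current:
        v += dt / tau_m * (v_rest - v) + dt * I
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# Constant drive above threshold: the neuron fires at a regular rate.
spike_train = lif_neuron(np.full(200, 60.0))
print("spike count:", spike_train.sum())
```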

In this presentation, we will give a critical review of spiking and frame-based neural network operation and then focus on the potential and requirements of frame-based neuromorphic computation using arrays of analog resistive elements. We will show that a successful implementation of this technology requires close interaction between materials, algorithms, architecture, and application.