Colloquium | Materials Science

Addressing the Computational Demands of Machine Learning

Microelectronics Colloquium Series

Abstract: The proliferation of machine learning workloads across the entire computing spectrum, from data centers to mobile, wearable, and Internet-of-Things (IoT) devices, is driven by the need to organize, analyze, interpret, and search through exploding amounts of data, as well as the need to understand and interact more intelligently with users and the environment. The past decade has seen tremendous developments in deep neural networks, a class of machine learning algorithms, and remarkable growth in their practical deployment.

In this talk, I will first present a quantitative analysis of the computational requirements of deep neural networks. The analysis highlights a large gap between the capabilities of current computing systems and the requirements posed by these workloads. This gap will only grow due to the seemingly insatiable appetite of these applications, together with diminishing benefits from technology scaling. I will then outline a roadmap of technologies that can help bridge this gap, ranging from accelerators for machine learning to approximate computing, in-memory computing, and neuromorphic computing.

Bio: Anand Raghunathan is the Silicon Valley Professor and Chair of the VLSI area in the School of Electrical and Computer Engineering at Purdue University, where he serves as Associate Director of the SRC/DARPA Center for Brain-inspired Computing (C-BRIC) and as founding co-director of the Purdue/TSMC Center for a Secured Microelectronics Ecosystem (CSME). Raghunathan received the B.Tech. degree from the Indian Institute of Technology, Madras, and the M.A. and Ph.D. degrees from Princeton University.