Article | Mathematics and Computer Science

Balaprakash gives presentations on machine learning for high-performance computing

Prasanna Balaprakash, a computer scientist in Argonne’s Mathematics and Computer Science Division with a joint appointment in the Argonne Leadership Computing Facility (ALCF), recently gave presentations on machine learning at two workshops.

Balaprakash gave a keynote presentation at the first international High Performance Computing and Machine Learning (HPCaML) workshop in Washington, D.C., on February 16. This new workshop brought together researchers at the intersection of the two domains – high-performance computing and machine learning – to share knowledge about advances made and challenges remaining in hardware, algorithms, and performance models. Balaprakash’s keynote address, titled “Machine-Learning-Based Performance Modeling and Tuning for High-Performance Computing,” focused on how emerging technologies – heterogeneous nodes, deep memory hierarchies, many-core processors – are necessitating innovative strategies to manage large, dynamic scientific applications.

Machine learning has long been applied effectively to complex problems such as image classification, speech recognition, and game playing. Only recently, however, has it attracted the attention of the scientific computing community. Indeed, in the past few years the U.S. Department of Energy has sponsored workshops and designated funding to promote the development of machine learning strategies and technologies and their application in fields such as geothermal energy, medicine, and the national power grid.

“Traditionally, expert-knowledge-based analytical performance models have been used to tune and improve the overall performance of applications on high-performance computing systems,” Balaprakash said. “But these models are becoming ineffective because of increasing hardware specialization, system software complexity, and rapidly evolving application codes. Models based on machine learning are effective here, but they usually require a large amount of training data, which can be expensive or even impossible to obtain. A key question, then, is how to develop and leverage data-efficient machine learning methods to automate these modeling and tuning processes, while preserving portability and scalability on extreme-scale systems.”
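
The basic idea of a data-driven performance model can be illustrated with a short, hypothetical sketch. The Python example below (using scikit-learn; not Argonne’s code) fits a regression model to a small set of synthetic (configuration, runtime) measurements and uses it to predict the runtime of unseen configurations; the tunable parameters and the runtime formula are invented for illustration.

```python
# Minimal sketch of a machine-learning-based performance model (hypothetical data):
# predict application runtime from tunable configuration parameters, using a
# modest set of measured runs as training data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200

# Synthetic measurements: each configuration is (tile size, unroll factor, thread count),
# and y is the "observed" runtime in seconds for that configuration.
tile = rng.integers(8, 129, size=n)
unroll = rng.integers(1, 9, size=n)
threads = rng.integers(1, 65, size=n)
X = np.column_stack([tile, unroll, threads]).astype(float)
y = 0.02 * tile + 1.5 / unroll + 40.0 / threads + rng.normal(0.0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a nonparametric regression model as the performance model.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("mean absolute error (s):", mean_absolute_error(y_test, model.predict(X_test)))
```

This sketch sidesteps the data-scarcity issue Balaprakash highlights; in practice the number of measured runs is limited, which is what motivates data-efficient techniques such as transfer learning.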

In his presentation, Balaprakash described the work he and his colleagues are doing at Argonne to address this question.  He presented examples of new automated data-driven performance models, transfer learning, and machine-learning-based autotuning search.

At the Optimization, Modeling, Analysis and Space Exploration Workshop (OMASEW) on February 17, Balaprakash gave the opening presentation, titled “Machine-Learning-Based Search for Automatic Performance Tuning.” Here he focused on how machine learning can be used to find high-performing code variants for automatic performance tuning.

“Optimizing computer codes by using autotuners can be prohibitively expensive, requiring evaluation of a large number of code variants. Search methods based on machine learning can help overcome this hurdle,” Balaprakash explained. In his talk, he discussed the unique challenges that autotuning problems pose from a mathematical optimization perspective. He then presented an efficient machine-learning-based search algorithm that samples a small number of input parameter configurations and progressively fits a surrogate model over the input-output space until the user-defined maximum number of evaluations is reached.
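
The search strategy described above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration of surrogate-model-based search, not Balaprakash’s actual algorithm: it evaluates a small random sample of configurations, fits a surrogate over the measured input-output pairs, and then repeatedly evaluates the configuration the surrogate predicts to be fastest, refitting after each run, until the user-defined evaluation budget is exhausted. The kernel, its parameters, and the runtime formula are invented for the example.

```python
# Hypothetical sketch of surrogate-model-based search for autotuning.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_kernel(config):
    """Stand-in objective: 'measured' runtime of a code variant (lower is better)."""
    tile, unroll, threads = config
    return 0.02 * tile + 1.5 / unroll + 40.0 / threads

def surrogate_search(candidates, max_evals=30, n_init=8, seed=0):
    rng = np.random.default_rng(seed)
    # Start by measuring a small random sample of configurations.
    init = rng.choice(len(candidates), size=n_init, replace=False)
    evaluated = {int(i): run_kernel(candidates[i]) for i in init}
    surrogate = RandomForestRegressor(n_estimators=100, random_state=seed)

    while len(evaluated) < max_evals:
        X = np.array([candidates[i] for i in evaluated])
        y = np.array(list(evaluated.values()))
        surrogate.fit(X, y)                            # refit on all measurements so far
        remaining = [i for i in range(len(candidates)) if i not in evaluated]
        preds = surrogate.predict(np.array([candidates[i] for i in remaining]))
        nxt = remaining[int(np.argmin(preds))]         # predicted-fastest unevaluated variant
        evaluated[nxt] = run_kernel(candidates[nxt])   # measure it and add to the training set

    best = min(evaluated, key=evaluated.get)
    return candidates[best], evaluated[best]

# Candidate space: (tile size, unroll factor, thread count) combinations.
grid = [(t, u, p) for t in (8, 16, 32, 64, 128)
                  for u in (1, 2, 4, 8)
                  for p in (1, 2, 4, 8, 16, 32, 64)]
best_config, best_time = surrogate_search(np.array(grid, dtype=float))
print("best configuration:", best_config, "runtime (s):", best_time)
```

A production autotuner would typically balance exploration and exploitation in the selection step rather than greedily picking the predicted minimum, but the overall structure (sample, fit, select, evaluate, repeat) is the same.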

“These two workshops gave attendees an opportunity not only to hear about advances in machine learning methods but also to share their ideas about new learning strategies for optimizing and tuning application performance automatically,” Balaprakash said.

For further information about the HPCaML 2019 workshop, see the website:

http://hpc.pnl.gov/hpcaml19/

For further information about the OMASEW workshop, see the website:

https://omasew.github.io/