This project seeks to enable the ongoing MPI-related research and development work at Argonne, with the overall goal of enabling MPICH and its derivatives to run effectively at exascale.

While MPI is a viable programming model at exascale, both the MPI standard and MPI implementations must address the challenges posed by the increased scale, performance characteristics, and evolving architectural features expected in exascale systems, as well as the capabilities and requirements of applications targeted at these systems.

The key challenges are fivefold: (1) interoperability with intranode programming models having a high thread count (such as OpenMP, OpenACC, and emerging asynchronous task models); (2) scalability and performance over complex architectures (including high core counts, processor heterogeneity, and heterogeneous memory); (3) software overheads that are exacerbated by lightweight cores and low-latency networks; (4) enhanced functionality (extensions to the MPI standard) based on experience with applications and high-level libraries/frameworks targeted at exascale; and (5) topics that become more significant as we move to the next generation of HPC architectures: memory usage, power, and resilience.
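
To make the first challenge concrete, the following is a minimal sketch (not project code) of the hybrid MPI + OpenMP pattern it refers to: multiple threads per rank issuing MPI calls concurrently, which requires the implementation to provide MPI_THREAD_MULTIPLE support efficiently.

```c
/* Illustrative hybrid MPI + OpenMP example: each OpenMP thread on a rank
 * exchanges a message with the same thread id on the neighboring rank.
 * Concurrent MPI calls from multiple threads require MPI_THREAD_MULTIPLE. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not supported\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each thread communicates independently; the message tag is the
     * thread id, keeping per-thread traffic separated. */
    #pragma omp parallel
    {
        int tid  = omp_get_thread_num();
        int peer = (rank + 1) % size;
        int from = (rank - 1 + size) % size;
        int sendval = rank * 100 + tid, recvval = -1;

        MPI_Sendrecv(&sendval, 1, MPI_INT, peer, tid,
                     &recvval, 1, MPI_INT, from, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d thread %d received %d\n", rank, tid, recvval);
    }

    MPI_Finalize();
    return 0;
}
```

Sustaining high message rates under this kind of multithreaded access, on lightweight cores and low-latency networks, is exactly where the software-overhead and scalability challenges above come into play.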

Specific goals of this proposal fall into four categories:

  1. Improvements to the MPICH implementation. Efficiently support features in current and future versions of the MPI standard for exascale architectures and applications.

  2. Improvements to the MPI standard. Enhance the MPI standard through the MPI Forum by leading or participating in its subcommittees in order to ensure that the standard continues to meet the evolving needs of applications, libraries, and higher-level languages.

  3. Interaction with the developers of applications and high-level libraries/languages. Work with ECP applications and other software systems to help them use MPI in the most efficient manner.

  4. Interaction with vendors and computing facilities involved in DOE supercomputer acquisitions.