HEP experiments rely on file-based input/output (I/O) and storage to process hundreds of petabytes of data every year. The CCE Fine-Grained I/O and Storage (IOS) team is helping to optimize I/O performance at scale on U.S. Department of Energy high-performance computing systems by proposing fine-grained parallel I/O and storage solutions. In collaboration with PPS, this team will design data models that map efficiently to memory constructs.

Measuring performance of ROOT I/O in HEP workflows on HPC systems: ROOT [1] I/O is central to all HEP experiments, so it is essential that ROOT I/O remain efficient when HEP workflows are ported to HPC. The IOS project uses Darshan [2], a scalable HPC I/O characterization tool, to profile the I/O behavior of different experimental workflows and gather insights for possible improvements on HPC platforms. As an early result of this work, improvements were made to Darshan itself to cover aspects of HEP computing such as fork safety.
[1] https://root.cern.ch/
[2] https://www.mcs.anl.gov/research/projects/darshan/
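
To make the profiling idea concrete, here is a minimal, self-contained sketch of what an I/O characterization tool records: per-file counters such as the number of write calls and bytes moved. This is an illustration in plain Python, not Darshan itself or its API; the `CountingFile` wrapper and counter names are invented for this example.

```python
# Conceptual illustration (not Darshan): an I/O characterization tool
# records per-file counters such as write-call counts and bytes written.
# Here a small wrapper around a Python file object collects similar counters.
import os
import tempfile
from collections import defaultdict

counters = defaultdict(lambda: {"writes": 0, "bytes_written": 0})

class CountingFile:
    """Wrap a binary file and count write calls and bytes (hypothetical helper)."""
    def __init__(self, path, mode):
        self._path = path
        self._f = open(path, mode)

    def write(self, data):
        n = self._f.write(data)
        counters[self._path]["writes"] += 1
        counters[self._path]["bytes_written"] += n
        return n

    def close(self):
        self._f.close()

# Simulate an application writing event data in fixed-size records.
path = os.path.join(tempfile.mkdtemp(), "events.bin")
f = CountingFile(path, "wb")
for _ in range(10):
    f.write(b"\x00" * 1024)   # one 1 KiB "event" per write call
f.close()

print(counters[path])  # {'writes': 10, 'bytes_written': 10240}
```

A real tool like Darshan gathers such counters transparently for every file an application touches, which is what makes workflow-wide profiling on HPC systems feasible.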

Investigate HDF5 as intermediate event storage for HPC processing: Many HEP experiments use ROOT [1] for data storage. ROOT has served the High Throughput Computing (HTC) needs of HEP experiments well, but it is less tailored for HPC, where other file formats, such as HDF5 [2], are more common. HDF5 may lack the complete data-model support of ROOT, but it has advantages for parallel I/O. These could benefit workflows such as the ATLAS EventService, where data is written from a large number of processes into small, intermediate files. The IOS team is developing a prototype that stores ROOT-serialized data in HDF5, allowing many processes to write to the same file in parallel.
[1] https://root.cern.ch/
[2] https://www.hdfgroup.org/solutions/hdf5
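
The core of the storage idea can be sketched without either library: serialized event data is treated as opaque byte blobs concatenated into one dataset, with a separate offset index enabling random access, much as one would lay events out in HDF5 datasets. In this sketch `pickle` stands in for ROOT serialization and plain Python containers stand in for HDF5 datasets; none of this is the team's actual prototype code.

```python
# Sketch of the storage layout (stand-ins only): ROOT-serialized events
# become opaque byte blobs stored back-to-back, with an (offset, length)
# index allowing random access -- analogous to a pair of HDF5 datasets.
import pickle

events = [{"id": i, "hits": list(range(i))} for i in range(5)]

blob = bytearray()
offsets = []                          # (start, length) per event: the "index"
for ev in events:
    data = pickle.dumps(ev)           # stand-in for ROOT serialization
    offsets.append((len(blob), len(data)))
    blob.extend(data)

# Random-access read-back of event 3 via the index.
start, length = offsets[3]
ev3 = pickle.loads(bytes(blob[start:start + length]))
print(len(offsets), ev3["hits"])      # 5 [0, 1, 2]
```

Because each writer only appends blobs and index entries, this layout extends naturally to many processes writing disjoint regions of the same file, which is what HDF5's parallel I/O support provides.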

Mimicking framework for understanding scalability and performance of HEP output methods: A framework has been developed to simulate the HEP output of specific data products (e.g., RECO, AOD, miniAOD) for different experiments (CMS, DUNE, ATLAS) and different scenarios. This framework allows deeper analysis of intermediate data-storage options without modifying the complex experiment-specific frameworks. It also enables exercising I/O at data rates not yet reached by the experiments, in order to find hidden bottlenecks. One such bottleneck was found in ROOT [1] serialization and has been fixed [2].
[1] https://root.cern.ch/
[2] https://github.com/root-project/root/pull/6062
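
A mimicking framework of this kind can be sketched in a few lines: synthetic data products of configurable per-event sizes are pushed through one shared output path, so different sinks and data rates can be compared without any experiment software. The tier names below are taken from the text, but the sizes and the `mimic_output` helper are illustrative assumptions, not the actual framework.

```python
# Minimal sketch of a "mimicking" output framework: synthetic products of
# configurable size are written through one code path, so output methods
# can be stress-tested without the full experiment framework.
import io
import os
import time

# Hypothetical per-event product sizes in bytes for different data tiers.
SCENARIOS = {
    "RECO":    50_000,
    "AOD":     10_000,
    "miniAOD":  2_000,
}

def mimic_output(tier, n_events, sink):
    """Write n_events synthetic products of the tier's size; return (seconds, bytes)."""
    payload = os.urandom(SCENARIOS[tier])   # stand-in for a serialized product
    t0 = time.perf_counter()
    for _ in range(n_events):
        sink.write(payload)
    return time.perf_counter() - t0, n_events * SCENARIOS[tier]

sink = io.BytesIO()                         # swap in a real file to probe a filesystem
elapsed, nbytes = mimic_output("miniAOD", 1000, sink)
print(f"wrote {nbytes} bytes in {elapsed:.4f} s")
```

Driving the same `sink` interface at ever-higher event rates is how hidden bottlenecks, such as the ROOT serialization issue mentioned above, can surface before the experiments themselves reach those rates.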