Seminar | Mathematics and Computer Science

Scalable High-Performance Analysis of Scientific Data

CS Seminar

Abstract: My team studies the use of supercomputers, beyond their traditional role in simulation and modeling, for the analysis and visualization of scientific data. Our strategy is three-tiered: to develop scalable software infrastructure, to build scalable algorithms on this foundation, and to engage with applications to drive further development. In this talk, I'll present a high-level overview of our solutions for rapidly developing scalable analysis algorithms and coupling them into in situ workflows, before diving into one research area in detail: multivariate functional approximation, or MFA.

MFA recasts scientific datasets in functional form by converting raw discrete data into a hypervolume of piecewise-continuous functions. The MFA model can represent numerous types of data because it is agnostic to the mesh, field, or discretization of the input dataset. Compared with existing discrete data models, the MFA model can enable many spatiotemporal analyses without converting the entire dataset back to the original discrete form. Post hoc, the MFA enables analytical, closed-form evaluation of points, derivatives, and integrals to high order anywhere in the domain, without being limited to the locations of the input data points.
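
To make the idea concrete, here is a minimal one-dimensional sketch, not the MFA library itself, of replacing discrete samples with a piecewise-continuous model and then querying it anywhere in the domain. It uses a cubic B-spline from SciPy as a stand-in for MFA's higher-dimensional functional representation; the sample data and query points are invented for illustration.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Discrete "raw" data: samples of an unknown field on a fixed grid.
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x)

# Fit a cubic B-spline through the samples; this is the continuous model
# standing in for the MFA hypervolume of piecewise-continuous functions.
spline = make_interp_spline(x, y, k=3)

# Once fitted, the model can be evaluated anywhere in the domain,
# not just at the original 50 sample locations.
xq = np.array([0.123, 1.0, 4.56])
values = spline(xq)                              # point evaluation
derivs = spline.derivative(1)(xq)                # closed-form first derivative
integral = spline.integrate(0.0, 2.0 * np.pi)    # closed-form integral over the domain

print(values, derivs, integral)
```

The same principle extends, in the MFA setting, to multivariate tensor-product models, where points, derivatives, and integrals are likewise evaluated analytically from the fitted representation rather than from the original discrete mesh.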