Massively Parallel Computations using Parallel Index Set in C++
In many parallel scientific codes we do not deal with distributed objects but rather with distributed containers. These codes exhibit recurring communication schemes that send only some entries of the container to other cores. The communicated type might change during the algorithm while the scheme stays the same. Often, efficient sequential routines already exist that developers want to reuse in their parallel algorithms.
In this session we present a parallel communication library based on the Message Passing Interface (MPI) standard to address these needs. It makes purely sequential containers usable in parallel algorithms by imposing a global index mapping onto them, together with a partitioning into entries owned by the core and ghost entries owned by other cores. Based on this mapping, communication schemes can be precomputed and reused for different types using generic programming.
We will show scalability tests of real-world simulation codes addressing more than 100,000,000,000 container entries on more than 200,000 cores. Furthermore, we will discuss how to make use of these results already on current multi-core servers and small clusters of them.
Speaker: Markus Blatt