KONWIHR

Kompetenznetzwerk für wissenschaftliches Höchstleistungsrechnen in Bayern (Competence Network for Scientific High Performance Computing in Bavaria)

Topic

Scalable Data and Visualisation Output Strategies within the waLBerla Framework

Applicant

Prof. Dr. Ulrich Rüde
Lehrstuhl für Systemsimulation (Informatik 10)
FAU Erlangen-Nürnberg

Project Report

Self-propelled swimmers have dimensions on the micrometre scale. As a consequence, their motion is associated with low Reynolds numbers, i.e. Stokes flow. In a swarm of such self-propelled swimmers, e.g. a swarm of E. coli bacteria, the hydrodynamic interactions between individual swimmers and their collisions, which involve near-field hydrodynamics, are the two fundamental mechanisms behind collective swarming behaviour. The latter, however, is still insufficiently investigated. In this context, simulations pose a software challenge but are also a promising tool, since they make it possible to model aspects not covered by analytical theory, e.g. interactions with surrounding boundaries, colliding swimmers, or swimmers beyond the Stokes regime. In order to observe the desired swarm-like behaviour, large-scale simulations in a multiphysics setting are required. In our case, this setting is realised by coupling the fluid dynamics framework waLBerla (widely applicable lattice Boltzmann from Erlangen) with the rigid body simulation framework pe. Both software packages already feature massively parallel implementations, but systematically evaluating and visualising the output data of a particulate flow simulation requires improvements and extensions of the data and visualisation output, which were developed and implemented during this project. The primary application is the systematic evaluation and visualisation of the emerging collective dynamics of the swimmers, but other existing cooperations also benefit substantially from the new output strategies: simulations of pore fluid flow and fine-sediment infiltration into the riverbed, in cooperation with the Institute of Hydraulic Engineering and Water Resources Management of RWTH Aachen; simulations of self-propelled Janus particles, in cooperation with the Institute for Computational Physics of the Universität Stuttgart; and simulations of fluidised beds, in cooperation with the Department of Chemical Engineering at IIT Delhi.


MPI-IO for swimmer output

We implemented an output routine based on the parallel VTK file format. To allow for scalable output, our routine creates a single output file and uses MPI-IO calls, which are mapped efficiently onto the parallel file systems of modern supercomputers. Arbitrary per-swimmer data, such as the swimmer's linear or rotational velocity, can also be added to the output and used later for the visualisation.
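The essence of this scheme can be sketched as follows: each process determines its disjoint region of the shared file via an exclusive prefix sum (MPI_Exscan) over the local data sizes and then writes collectively. This is a minimal illustration only; the record layout and the names (SwimmerRecord, writeSwimmers) are assumptions, not the actual waLBerla interface, and the VTK header handling is omitted.

    // Sketch of a single-file parallel write with MPI-IO. The record layout
    // and names are illustrative assumptions, not the waLBerla routine itself.
    #include <mpi.h>
    #include <vector>

    struct SwimmerRecord {
        double position[3];
        double linearVelocity[3];
        double angularVelocity[3];
    };

    void writeSwimmers(MPI_Comm comm, const std::vector<SwimmerRecord>& local,
                       const char* filename)
    {
        // Bytes contributed by this process.
        MPI_Offset myBytes =
            static_cast<MPI_Offset>(local.size() * sizeof(SwimmerRecord));

        // Exclusive prefix sum over all ranks yields this rank's file offset,
        // so every process writes into a disjoint region of the same file.
        MPI_Offset myOffset = 0;
        MPI_Exscan(&myBytes, &myOffset, 1, MPI_OFFSET, MPI_SUM, comm);
        int rank;
        MPI_Comm_rank(comm, &rank);
        if (rank == 0) myOffset = 0;  // Exscan leaves rank 0's result undefined

        MPI_File fh;
        MPI_File_open(comm, filename, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        // Collective write: the MPI library can aggregate the requests and
        // map them efficiently onto the parallel file system.
        MPI_File_write_at_all(fh, myOffset, local.data(),
                              static_cast<int>(myBytes), MPI_BYTE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
    }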

Advanced velocity field output strategies

In this part of the project, a memory- and computation-efficient output strategy for the velocity field has been developed. As waLBerla uses the lattice Boltzmann method to simulate the fluid flow inside the domain and between the self-propelled swimmers, the fluid velocity u is not directly available but has to be computed from the probability distribution functions (PDFs) via their first moment. We implemented a so-called velocity adapter following the Adapter design pattern in C++. It allows the field that stores the PDF values to be used as if it were the velocity field, by computing and returning the velocity inside a cell only when that cell is accessed. This avoids allocating additional memory for an auxiliary velocity field and computes only the velocity values that are actually needed.
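For the moment computation itself: the density is the zeroth moment of the PDFs, rho = sum_q f_q, and the velocity is the first moment, u = (1/rho) sum_q f_q c_q, where c_q are the discrete lattice velocities. The following sketch illustrates the adapter idea under simplified assumptions; the D3Q19 stencil is hard-coded, the field type is generic with an assumed accessor f(x, y, z, q), and none of the names reproduce waLBerla's actual classes.

    // Sketch of a velocity adapter: a PDF field is exposed as if it were a
    // velocity field, computing u only when a cell is accessed.
    #include <array>
    #include <cstddef>

    // Discrete lattice velocities of the D3Q19 stencil.
    constexpr std::array<std::array<int, 3>, 19> c = {{
        {0, 0, 0},
        {1, 0, 0},  {-1, 0, 0},  {0, 1, 0},  {0, -1, 0},
        {0, 0, 1},  {0, 0, -1},
        {1, 1, 0},  {-1, 1, 0},  {1, -1, 0}, {-1, -1, 0},
        {1, 0, 1},  {-1, 0, 1},  {1, 0, -1}, {-1, 0, -1},
        {0, 1, 1},  {0, -1, 1},  {0, 1, -1}, {0, -1, -1}
    }};

    struct Vec3 { double x, y, z; };

    template <typename PdfField>  // requires a method f(x, y, z, q)
    class VelocityAdapter {
    public:
        explicit VelocityAdapter(const PdfField& pdfs) : pdfs_(pdfs) {}

        // Density: zeroth moment, rho = sum_q f_q.
        // Velocity: first moment, u = (1/rho) * sum_q f_q * c_q.
        Vec3 operator()(std::size_t x, std::size_t y, std::size_t z) const {
            double rho = 0.0;
            Vec3 u{0.0, 0.0, 0.0};
            for (std::size_t q = 0; q < c.size(); ++q) {
                const double fq = pdfs_.f(x, y, z, q);
                rho += fq;
                u.x += fq * c[q][0];
                u.y += fq * c[q][1];
                u.z += fq * c[q][2];
            }
            u.x /= rho; u.y /= rho; u.z /= rho;
            return u;  // computed on demand, no auxiliary field is stored
        }

    private:
        const PdfField& pdfs_;
    };

An adapter of this kind can then be handed to the output routine in place of a plain velocity field, so that only the cells actually written to the VTK file trigger a moment computation.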


Checkpointing strategy for particle simulations

The possibility to checkpoint and restart a simulation has many benefits, especially in an HPC environment. This feature makes it possible to save a certain state of the simulation and to continue from this point without redoing the work needed to reach the checkpointed state. This is helpful in many scenarios. If some rare error occurs, one can start investigating near the actual point of error. If there is a hardware failure, one can restart without losing all the invested compute time. One can also try out different simulation parameters at an advanced state of the simulation. And finally, it becomes possible to split a simulation run into many sequential runs.

Our implementation of this feature first identifies the elements which need to be stored for later use. It is very important to store only the necessary information in order to save storage capacity. In the context of the pe, this is the current state of all particles. Shadow particles (ghost particles) are not saved since they can be reconstructed from the master particles, which significantly reduces the number of particles that need to be saved. Parameters like solver settings do not have to be stored either, since they normally do not change frequently and can thus be reconstructed from the initial state. In the case of dynamic load balancing and domain refinement, the current domain partitioning also has to be saved.

The saving mechanism is a two-step process. In the first step, all processes serialize their information. In the second step, the serialized information is written to a file using MPI-IO. For the serialization, the same routines are used that also pack particles into MPI buffers during the synchronization algorithm. This minimizes code duplication and possible errors. The loading process works the other way round: first, all general information is reconstructed by the normal initialization, then the stored information is loaded from the file via MPI-IO and de-serialized. Afterwards, a synchronization step is necessary to reconstruct all shadow (ghost) particles. Now the simulation can continue from the saved state. We tested our implementation with various scenarios from our daily work.
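The round trip can be sketched as follows. The Particle layout and names are assumptions for illustration only; in the pe, the (de-)serialization reuses the buffer routines of the MPI synchronization, and the resulting buffers are written to and read from a single file with MPI-IO, using the same offset computation as in the swimmer output sketch above.

    // Sketch of the checkpoint serialization round trip. The Particle layout
    // and names are illustrative assumptions, not the actual pe data model.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct Particle {
        std::uint64_t uid;
        double position[3];
        double linearVel[3];
        double angularVel[3];
        bool shadow;  // shadow (ghost) copies are reconstructed, not stored
    };

    // Step 1: serialize only master particles into a flat byte buffer.
    std::vector<char> serialize(const std::vector<Particle>& particles) {
        std::vector<char> buffer;
        for (const Particle& p : particles) {
            if (p.shadow) continue;  // recreated later by the synchronization
            const char* raw = reinterpret_cast<const char*>(&p);
            buffer.insert(buffer.end(), raw, raw + sizeof(Particle));
        }
        return buffer;
    }

    // On restart: rebuild the master particles from the buffer; a subsequent
    // synchronization recreates all shadow particles on the neighbouring
    // processes.
    std::vector<Particle> deserialize(const std::vector<char>& buffer) {
        std::vector<Particle> particles(buffer.size() / sizeof(Particle));
        std::memcpy(particles.data(), buffer.data(),
                    particles.size() * sizeof(Particle));
        return particles;
    }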

Initialization of dense swarms

In order to define complex initial conditions, such as densely packed random ensembles of swimmers, the text-based configuration system of waLBerla was extended to make use of the Python scripting language. This allows the user to flexibly compute initial positions for the swimmers, potentially also using elementary geometric subroutines from third-party Python modules. Optionally, the user may also access the swimmer positions at runtime through the Python interface, to scan for interesting events and trigger a spatially restricted VTK output.
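As an illustration of how such a coupling can look in principle, the following sketch embeds the Python interpreter from C++ and reads swimmer positions computed by a user-supplied configuration script. It uses the plain CPython C API; waLBerla's actual Python interface is more elaborate, and the script name as well as the expected variable swimmer_positions are assumptions. Error handling is omitted.

    // Sketch: run a user's Python configuration script and collect initial
    // swimmer positions from it. The variable name "swimmer_positions" is an
    // assumption for this illustration, not part of waLBerla's interface.
    #include <Python.h>
    #include <array>
    #include <cstdio>
    #include <vector>

    std::vector<std::array<double, 3>> readInitialPositions(const char* script) {
        Py_Initialize();

        // Execute the user's configuration script in the __main__ namespace.
        PyObject* main = PyImport_AddModule("__main__");
        PyObject* globals = PyModule_GetDict(main);
        FILE* fp = std::fopen(script, "r");
        PyObject* result = PyRun_File(fp, script, Py_file_input, globals, globals);
        Py_XDECREF(result);
        std::fclose(fp);

        // The script is expected to define a list of (x, y, z) tuples.
        PyObject* positions = PyDict_GetItemString(globals, "swimmer_positions");

        std::vector<std::array<double, 3>> out;
        const Py_ssize_t n = PyList_Size(positions);
        for (Py_ssize_t i = 0; i < n; ++i) {
            PyObject* t = PyList_GetItem(positions, i);
            out.push_back({PyFloat_AsDouble(PyTuple_GetItem(t, 0)),
                           PyFloat_AsDouble(PyTuple_GetItem(t, 1)),
                           PyFloat_AsDouble(PyTuple_GetItem(t, 2))});
        }

        Py_Finalize();
        return out;
    }

A user script could then, for example, draw candidate positions at random and reject those that overlap existing swimmers, repeating until the desired packing density is reached.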