Thomas Zeiser

Some comments by Thomas Zeiser about HPC@RRZE and other things


Running Turbomole 5.10 in parallel on RRZE’s clusters

Recent versions of Turbomole come bundled with HP-MPI. This allows parallel runs over different types of network interconnects, and HP-MPI is supposed to select the fastest one available – but sometimes HP-MPI requires manual intervention to pick the correct interconnect. As parallel runs of Turbomole also require one additional control process, further intervention is needed to avoid overbooking the CPUs.

Here is a short recipe for RRZE’s clusters:

[shell]
#!/bin/bash -l
#PBS -lnodes=4:ppn=4,walltime=12:00:00
#PBS -q opteron
#… further PBS options as desired

module load turbomole-parallel/5.10

# shorten node list by one
. /apps/rrze/bin/chop_nodelist.sh

# prevent IB initialization attempts if IB is not available
ifconfig ib0 >& /dev/null
if [ $? -ne 0 ]; then
  export MPIRUN_OPTIONS="-v -prot -TCP"
fi

# now call your TM binaries (jobx or whatever)
[/shell]

Manually setting the TM architecture: As the s2 queue of the Transtec cluster recently gained access to both 32-bit and 64-bit nodes, it may be a good idea to restrict Turbomole to the 32-bit binaries, which run on both types of nodes, by including export TURBOMOLE_SYSNAME="i786-pc-linux" in the PBS job script, e.g. as sketched below.
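As a minimal, untested sketch of where that setting would go (the queue name s2 is taken from the remark above; adjust the resource request to your needs):

[shell]
#!/bin/bash -l
#PBS -lnodes=4:ppn=4,walltime=12:00:00
#PBS -q s2

module load turbomole-parallel/5.10

# force the 32-bit Turbomole binaries so the same job
# can run on the 32-bit and the 64-bit nodes of the s2 queue
export TURBOMOLE_SYSNAME="i786-pc-linux"

# shorten node list by one
. /apps/rrze/bin/chop_nodelist.sh

# now call your TM binaries (jobx or whatever)
[/shell]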

Pinning with HP-MPI: As Turbomole internally uses HP-MPI, the usual pinning mechanisms of RRZE’s mpirun cannot be used with Turbomole. If you want pinning, you have to check the HP-MPI documentation and use HP-MPI’s specific command-line options or environment variables; cf. page 16/17 of /apps/HP-MPI/latest/doc/hp-mpi.02.02.rn.pdf. A hedged sketch is given below.
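As an untested sketch only: HP-MPI’s mpirun accepts a -cpu_bind startup option, which could again be handed to Turbomole’s wrapper scripts via MPIRUN_OPTIONS. The sub-option used here (rank) is an assumption; verify it against the release notes cited above before relying on it.

[shell]
# untested sketch: request CPU binding from HP-MPI; the exact
# -cpu_bind sub-option must be checked against
# /apps/HP-MPI/latest/doc/hp-mpi.02.02.rn.pdf for the installed version
export MPIRUN_OPTIONS="-v -prot -cpu_bind=rank"

# now call your TM binaries (jobx or whatever)
[/shell]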