Thomas Zeiser

Some comments by Thomas Zeiser about HPC@RRZE and other things

MpCCI on SGI Altix (3)

Tests of using the client-server mode with SGI MPT … a never-ending tragedy:

  • link your application with: ccilink -client -nompilink ..... -lmpi
  • produce the procgroup file with ccirun -server -norun ex2.cci
  • run your simulation (a consolidated job-script sketch follows at the end of this post):
    /opt/MpCCI/mpiexec/bin/mpiexec -server &
    /opt/MpCCI/mpiexec/bin/mpiexec -config=ccirun_server.procgroup >& s &
    sleep 10
    x1 & x2 < /dev/null

  • The number of CPUs requested must be at least as large as the number of server processes, i.e. those started with mpiexec.

    If you use mpich instead of MPT, all processes have to be started with mpiexec. As a consequence, the number of requested CPUs must equal the total number of processes: the PBS-TM interface used by mpiexec does not allow CPUs to be overbooked.

    … after many hours of testing: IT SEEMS THAT MpCCI WITH SGI MPT DOES *NOT* WORK RELIABLY AT ALL … mpi_comm_rank == 0 on all processes 🙂 despite using the correct mpif.h files and the matching mpirun for the server and client applications. (A minimal rank-check sketch at the end of this post makes this failure mode easy to verify.)

    My current conclusions:

    • MpCCI does not support SGI MPT natively
    • using mpich on SGI Altix for all communication is NOT an option, as benchmarks showed that CFD applications run slower by a factor of 2 or more with mpich than with MPT
    • using MpCCI in client-server mode also does not seem to work (see above)

    That means MpCCI is not usable on SGI Altix at all. Sorry for those who rely on it.
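
For reference, here is how the individual steps above fit together in one batch job. This is only a sketch: the PBS header, the names myapp.o, myapp, x1, x2 and the log file server.log are placeholders and assumptions of mine, not taken from the MpCCI documentation.

    #!/bin/bash
    #PBS -l ncpus=4    # hypothetical CPU request; must at least cover the
                       # server processes started via mpiexec (see above)

    # 1) link the client application against MpCCI and SGI MPT
    #    (normally done once, before submitting the job)
    ccilink -client -nompilink myapp.o -o myapp -lmpi

    # 2) generate the procgroup file for the coupling server without running it
    ccirun -server -norun ex2.cci

    # 3) start the mpiexec daemon and the MpCCI server processes
    /opt/MpCCI/mpiexec/bin/mpiexec -server &
    /opt/MpCCI/mpiexec/bin/mpiexec -config=ccirun_server.procgroup >& server.log &
    sleep 10           # give the server processes time to come up

    # 4) start the client executables
    x1 &
    x2 < /dev/null
    wait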
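The mpi_comm_rank symptom can be checked independently of MpCCI with a few lines of MPI code. A minimal sketch (rankcheck.c is my own test file, not part of MpCCI):

    /* rankcheck.c -- print the rank and communicator size each process sees */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Compile and run it, e.g. with an MPICH-style wrapper (with MPT you would instead compile with the system compiler plus -lmpi and launch with SGI's mpirun):

    mpicc rankcheck.c -o rankcheck
    mpirun -np 4 ./rankcheck

A healthy run prints "rank 0 of 4" … "rank 3 of 4" in some order; the broken setup described above prints "rank 0 of 1" on every process.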