Thomas Zeiser

Some comments by Thomas Zeiser about HPC@RRZE and other things

Using NEC SX-8 at HLRS

The NEC SX-8 is a nice vector machine. Its compilers produce a lot of compile-time information on optimization and vectorization.

Some unsorted notes to make life easier…

  • The NEC SX series uses big endian; the “frontend” system, an Itanium2 machine (NEC TX-7), however, uses little endian. Fortran programmers have several options for automatic conversion of unformatted data:
    • on the IA64 side (for executables generated with the Intel compiler): setenv F_UFMTENDIAN big makes your program read/write big-endian data
    • on the NEC SX side: setenv F_UFMTENDIAN u[,u,...] to read/write little-endian data; u[,u,...] are the unit numbers to convert.
  • By default, the NEC SX-8 Fortran runtime library does not read/write unformatted records which are larger than 2 GB. setenv F_EXPRCW u[,u,...] enables processing of file records that exceed 2 GB in length. When this environment variable is set, both record control words and end-of-record markers are expanded to 8-byte quantities. F_EXPRCW and F_UFMTENDIAN cannot be used for the same unit. A short example follows below.
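
To illustrate, a hypothetical csh snippet for a batch job on the SX-8 could look as follows; the unit numbers 10, 20 and 30 and the executable name a.out are made up for this example:

    # convert the unformatted files on units 10 and 20 to/from little endian
    setenv F_UFMTENDIAN 10,20
    # allow records larger than 2 GB on unit 30 (must not be one of the units listed above)
    setenv F_EXPRCW 30
    ./a.out

On the TX-7 frontend, the counterpart for Intel-compiled executables would simply be setenv F_UFMTENDIAN big.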

MpCCI on SGI Altix (3)

Tests of the client-server mode with SGI MPT … a never-ending tragedy:

  • link your application with: ccilink -client -nompilink ..... -lmpi
  • produce the procgroup file with ccirun -server -norun ex2.cci
  • run your simulation (a combined script sketch follows after this list):
    /opt/MpCCI/mpiexec/bin/mpiexec -server &
    /opt/MpCCI/mpiexec/bin/mpiexec -config=ccirun_server.procgroup >& s & x1 & x2 < /dev/null
    sleep 10
    
  • The number of CPUs requested must be at least as large as the number of server processes, i.e. those started with mpiexec.

    If you use mpich instead of MPT, all processes have to be started with mpiexec. As a consequence, the number of requested CPUs must equal the total number of processes. The PBS-TM interface used by mpiexec does not allow overbooking CPUs.

    … after many hours of testing: IT SEEMS THAT MpCCI WITH SGI MPT DOES *NOT* WORK RELIABLY AT ALL … MPI_Comm_rank returns 0 on all processes 🙂 despite using the correct mpif.h files and mpirun commands for the server and client applications.

    My current conclusions:

    • MpCCI does not support SGI MPT natively
    • using mpich on SGI Altix for all communication is not an option, as benchmarks showed that CFD applications run slower by a factor of 2 or more with mpich than with MPT
    • using MpCCI in client-server mode does not seem to work either (see above)

    That means MpCCI is currently not usable at all on SGI Altix. Sorry for those who rely on it.
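
For reference, the run phase of the recipe above, collected into a single csh fragment; the log-file name server.log and the client executables client1/client2 are placeholders, and, as described above, this setup did not work reliably with SGI MPT:

    # start the MpCCI coupling server processes under PBS via mpiexec
    /opt/MpCCI/mpiexec/bin/mpiexec -server &
    /opt/MpCCI/mpiexec/bin/mpiexec -config=ccirun_server.procgroup >& server.log &
    # give the server processes time to come up before launching the clients
    sleep 10
    # start the client applications built with ccilink -client -nompilink ... -lmpi
    ./client1 &
    ./client2 < /dev/null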