Thomas Zeiser

Some comments by Thomas Zeiser about HPC@RRZE and other things

Content

Early usage of Emmy

NOTE

This article is no longer updated, as Emmy has meanwhile entered regular production.

Please check the official documentation available at http://www.rrze.uni-erlangen.de/dienste/arbeiten-rechnen/hpc/systeme/emmy-cluster.shtml

ORIGINAL (UNMAINTAINED) ARTICLE

RRZE’s new HPC system “Emmy” is open to the very first early adopters …

Measured LINPACK performance: 191 TFlop/s (using the CPUs of all 560 nodes and enabling Turbo Mode) – #210 in the Top500 of November 2013!

Hardware:

  • 544 regular compute nodes (e[01]XXX), each with
    • 2 sockets with Intel “Ivy Bridge” 10-core CPUs (Intel Xeon E5-2660 v2 @ 2.20GHz), i.e. 20 physical cores per node showing up as 40 virtual cores thanks to SMT
    • 64 GB main memory (approx. 60 GB usable; DDR3-1600)
    • QDR Infiniband
    • no local harddisk
  • 16 nodes with accelerators (e1[01]6X)
    • 2 sockets with Intel “Ivy Bridge” 10-core CPUs (Intel Xeon E5-2660 v2 @ 2.20GHz), i.e. 20 physical cores per node showing up as 40 virtual cores thanks to SMT
    • 64 GB main memory (DDR3-1866)
    • QDR Infiniband
    • 1 TB local harddisk
    • either one or two Intel Xeon Phi “MIC” cards or NVidia Kepler K20m GPUs
  • 2 login nodes “emmy.rrze.uni-erlangen.de”
  • parallel file system (approx. 440 TB capacity)

Login nodes:

  • SSH to emmy.rrze.uni-erlangen.de to compile and submit jobs.
  • all usual HPC file systems are available; in addition, Emmy has a new parallel file system which is available on the new system only.

Job submission:

  • Job submission is possible from the login nodes as well as from cshpc (using qsub.emmy in the latter case); see the current limitations below regarding cshpc.
  • ppn=40 is mandatory; other limits are RRZE-typical (e.g. 1h for devel and 24h for work).
  • Single-node jobs are unwanted on the new system.
  • specific node types can be requested using their properties: :ddr1600 (544 nodes qualify), :ddr1866 (16 nodes qualify), :k20m (one or two NVidia Kepler cards; 10 nodes qualify), :k20m1x (one NVidia Kepler card; 4 nodes qualify), :k20m2x (two NVidia Kepler cards; 6 nodes qualify), :phi (one or two Xeon Phi “MIC”; 10 nodes qualify), :phi1x (one Xeon Phi; 4 nodes qualify), :phi2x (two Xeon Phi; 6 nodes qualify) – see below for special notes on the Xeon Phi nodes! A sketch of a job script requesting node properties follows after this list.
  • specific clock frequencies can be requested; supported are: turbo, noturbo, f2.2, f2.1, f2.0, f1.9, f1.8, f1.7, f1.6, f1.5, f1.4, f1.3, f1.2.
    By default, the “ondemand” governor is used, which goes up to turbo (typically 2.6 GHz).
  • The typical web pages showing the system status/utilization are available at http://hpc.rrze.fau.de/kundenbereich/queues-emmy.shtml
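
The following is a minimal sketch of an Emmy job script, assuming standard Torque/PBS directives; the job name, the requested node property, and the binary are placeholders and not taken from the official documentation:

    #!/bin/bash -l
    # hypothetical sketch of an Emmy job script; names are illustrative only
    #PBS -N myjob
    #PBS -l nodes=4:ppn=40:ddr1600   # ppn=40 is mandatory; :ddr1600 selects regular compute nodes
    #PBS -l walltime=24:00:00        # RRZE-typical limit for the work queue
    cd $PBS_O_WORKDIR                # start in the directory the job was submitted from
    module load intelmpi             # makes mpirun_rrze available in the search path
    mpirun_rrze ./my_mpi_binary      # placeholder binary; add the usual mpirun options as needed

Such a script is submitted with qsub from one of the login nodes (or with qsub.emmy from cshpc, once that works again).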

Software/Modules/Running MPI jobs:

  • All users have BASH as login shell. No exceptions.
  • All major software and libraries should be available; since the last update, most software is also available in the same versions on LiMa.
  • Intel’s mpiexec.hydra was broken for quite some time (i.e. several versions); see https://blogs.fau.de/zeiser/2013/11/28/perhost-option-in-intels-mpiexec-hydra-is-broken/. Typical users should go with RRZE’s mpirun_rrze.
  • If an intelmpi module is loaded, mpirun_rrze is available in the search path. It learned a new pinning option for pure-MPI binaries: “-pinexpr EXPR”, where EXPR can be any syntax likwid-pin understands, e.g. S0:0-9@S1:0-9 to select ten cores each on socket 0 and socket 1 (see the example after this list).
  • Intel MPI is recommended, but OpenMPI is available, too.
  • Sample job scripts for Ansys/CFX and STAR-CCM+ are available in /apps/Ansys/ and /apps/STAR-CCM+/ (visible only to those who are eligible for these commercial software packages). For Ansys/CFX there is a preview of version 15.0 available which should scale much better than version 14.5. Version 15.0pre3 currently uses a separate license (expired on Oct. 17th) which must only be used for benchmarking.
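
As a hedged illustration of the -pinexpr pinning option mentioned above (the process count and the binary name are assumptions for a 4-node job, not prescriptions from the documentation):

    module load intelmpi             # provides mpirun_rrze in the search path
    # pin the MPI processes to the physical cores only:
    # ten cores on socket 0 and ten on socket 1 per node (likwid-pin syntax)
    mpirun_rrze -np 80 -pinexpr S0:0-9@S1:0-9 ./my_mpi_binary

Although ppn=40 is requested due to SMT, pinning to S0:0-9@S1:0-9 keeps one MPI process per physical core and leaves the virtual (SMT) cores idle.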

Current limitations: (last updated 2013-11-22)

  • Every HPC user can log into the login nodes of Emmy; but only specifically enabled accounts (“early adopters”) can submit jobs. Early adopters are by invitation only.
  • Jobs cannot be submitted from cshpc due to incompatibilities of PBS versions.
  • Users can only see their own jobs in the queue (and not the jobs of other members of their group).
  • If there are other jobs, a single user cannot get more than 192 nodes or 48 running jobs. If there are no other jobs, up to 384 nodes or 96 running jobs are possible. (updated limits as of 2013-10-07)
  • pbsdsh (and other software which uses the PBS-TM API, e.g. mpiexec.pbs) does not work reliably.
  • If mpirun_rrze within a batch job fails with “… Too many logins …”, contact hpc@rrze and, for now, try adding a “sleep 5” before calling mpirun. (2013-11-05)
  • The runtime of jobs cannot be extended by the sysadmins; jobs are always killed once their initial walltime request is over. This might be fixed as of 2013-10-02.
  • Directories on the parallel file system ($FASTTMP = /elxfs) are not yet created automatically. Contact RRZE to get a directory.
  • Xeon Phi “MIC” was not usable at all initially; the Xeon Phis should be usable since 2013-10-25 – the configuration is still improving. Some additional remarks:
    • TODO: cross-compilation for Xeon Phi does NOT work yet (as of 2013-10-25) on the login nodes; request a Phi node and compile there for now.
    • TODO: users are not yet regularly synchronized to the Xeon Phis (this has to be done manually by the admins by running /root/micpw.sh on the management node)
    • Hostbased SSH should work now (2013-10-25) also for the Xeon Phi, similarly to the regular compute nodes (i.e. from within the compute nodes but not from the login nodes); thus, there should be no need to have password-less SSH keys around.
    • offload-mode has not been tested yet; volunteers?
    • ATTENTION as of 2013-10-25: for Intel MPI between host and Phi or between Phis, explicitly use I_MPI_DEVICE=shm:dapl. shm:ofa behaves strangely (i.e. it fails unless I_MPI_OFA_ADAPTER_NAME=mlx4_0 is set, which is now done automatically; and even with I_MPI_OFA_ADAPTER_NAME=mlx4_0 set, the performance between host and Phi is not really good). A hedged example is given after this list.
    • NOTE as of 2013-10-25: I_MPI_OFA_ADAPTER_NAME=mlx4_0 is required on nodes with Xeon Phi for communication between the normal compute nodes; otherwise Intel MPI uses the wrong device. I_MPI_OFA_ADAPTER_NAME=mlx4_0 is now set automatically by the intelmpi module file. This setting hopefully does not have any drawbacks for normal jobs.
    • Jobs with property :phi / :phi1x / :phi2x do not start automatically. Contact hpc@rrze (2013-11-18)
  • The system is moving from pre-production to regular operation. Thus, still expect outages without announcement. Nevertheless, always report strange behaviour to hpc@rrze.fau.de!
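
As a hedged sketch of the Intel MPI settings from the Xeon Phi remarks above (the launch command and the binary are placeholders; the exact way to start ranks on the Phi may differ):

    # inside a job on a node requested with :phi / :phi1x / :phi2x
    module load intelmpi             # on Phi nodes this also sets I_MPI_OFA_ADAPTER_NAME=mlx4_0
    export I_MPI_DEVICE=shm:dapl     # explicit fabric for host<->Phi and Phi<->Phi communication;
                                     # shm:ofa behaves strangely (see the remarks above)
    mpirun_rrze ./my_mpi_binary      # placeholder binary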