Thomas Zeiser

Some comments by Thomas Zeiser about HPC@RRZE and other things


Xserver/VirtualGL on NVidia K20m GPGPUs

As delivered, our NVidia K20m GPGPUs were in GPU Operation Mode 1/COMPUTE. If one tries to start the Xorg server, /var/log/Xorg.0.log shows:

    [   134.516] (II) NVIDIA: Using 4095.62 MB of virtual memory for indirect memory
    [   134.516] (II) NVIDIA:     access.
    [   134.523] (EE) NVIDIA(0): Failed to allocate 2D engine
    [   134.523] (EE) NVIDIA(0):  *** Aborting ***
    [   134.523] (EE) NVIDIA(0): Failed to allocate 2D objects
    [   134.523] (EE) NVIDIA(0):  *** Aborting ***
    [   134.619] Fatal server error:
    [   134.619] AddScreen/ScreenInit failed for driver 0

The GPU Operation Mode can be set to 0/ALL_ON by running nvidia-smi --gom=0 and rebooting the node. Afterwards, starting the Xorg server succeeds (and VirtualGL should work for remote visualization).
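
Roughly, these are the required steps; the exact wording of the query output depends on the driver version, so take the grep pattern below only as a sketch:

[shell]
# switch the GPU Operation Mode of all GPUs in the node to ALL_ON (0);
# add -i <id> to target a single GPU. The change only takes effect after a reboot.
nvidia-smi --gom=0
reboot

# after the reboot, verify the current (and pending) GPU Operation Mode
nvidia-smi -q | grep -A 2 -i "GPU Operation Mode"
[/shell]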

According to the documentation for NVidia driver 325.15, setting the GPU Operation Mode is only supported on  “GK110 M-class and X-class Tesla products from the Kepler family”.

Cluster2013: Racks for “Emmy” cluster delivered

On Monday, July 29, eleven new water-cooled racks for our new cluster were delivered and moved to their final position in the server room, which is now quite full.


View from the entrance on one of the two rows of the new water cooled racks of the “Emmy” cluster which connect to the old “Woody” cluster in the front and which is close to the “LiMa” cluster in the back.


Eleven new water cooled racks of the “Emmy” cluster.

"Cool door" of the new water cooled racks of the "Emmy" cluster.

“Cool door” of the new water cooled racks of the “Emmy” cluster.

However, there is no firm date yet for connecting the new racks to the cold-water supply, nor for mounting the compute hardware. Let's hope …

No free lunch – dying the death of parallelism

The following table gives a short overview of the main HPC systems at RRZE and the development of number of cores, clock frequency, peak performance, etc.:

Metric | original Woody (w0xxx) | w10xx | w11xx | w12xx + w13xx | LiMa | Emmy | Meggie
Year of installation | Q1/2007 | Q1/2012 | Q4/2013 | Q1/2016 + Q4/2016 | Q4/2010 | Q4/2013 | Q4/2016
total number of compute nodes | 222 | 40 | 72 | 8 + 56 | 500 | 560 | 728
total number of cores | 888 | 160 | 288 | 32 + 224 | 6000 | 11200 | 14560
double precision peak performance of the complete system | 10 TFlop/s | 4.5 TFlop/s | 15 TFlop/s | 1.8 + 12.5 TFlop/s | 63 TFlop/s | 197 TFlop/s | ~0.5 PFlop/s (assuming the non-AVX base frequency as AVX turbo frequency)
increase of peak performance of the complete system compared to Woody | 1.0 | 0.4 | 0.8 | 0.2 + 1.25 | 6.3 | 20 | 50
max. power consumption of compute nodes and interconnect | 100 kW | 7 kW | 10 kW | 2 + 6.5 kW | 200 kW | 225 kW | ~200 kW
Intel CPU generation | Woodcrest | SandyBridge (E3-1280) | Haswell (E3-1240 v3) | Skylake (E3-1240 v5) | Westmere-EP (X5650) | IvyBridge-EP (E5-2660 v2) | Broadwell-EP (E5-2630 v4)
base clock frequency | 3.0 GHz | 3.5 GHz | 3.4 GHz | 3.5 GHz | 2.66 GHz | 2.2 GHz | 2.2 GHz
number of sockets per node | 2 | 1 | 1 | 1 | 2 | 2 | 2
number of (physical) cores per node | 4 | 4 | 4 | 4 | 12 | 20 | 20
SIMD vector length | 128 bit (SSE) | 256 bit (AVX) | 256 bit (AVX+FMA) | 256 bit (AVX+FMA) | 128 bit (SSE) | 256 bit (AVX) | 256 bit (AVX+FMA)
maximum single precision peak performance per node | 96 GFlop/s | 224 GFlop/s | 435 GFlop/s | 448 GFlop/s | 255 GFlop/s | 704 GFlop/s | 1408 GFlop/s
peak performance per node compared to Woody | 1.0 | 2.3 | 4.5 | 4.5 | 2.6 | 7.3 | 14.7
single precision peak performance of serial, non-vectorized code | 6 GFlop/s | 7.0 GFlop/s | 6.8 GFlop/s | 7.0 GFlop/s | 5.3 GFlop/s | 4.4 GFlop/s | 4.4 GFlop/s
performance for unoptimized serial code compared to Woody | 1.0 | 1.17 | 1.13 | 1.17 | 0.88 | 0.73 | 0.73
main memory per node | 8 GB | 8 GB | 8 GB | 16 GB / 32 GB | 24 GB | 64 GB | 64 GB
memory bandwidth per node | 6.4 GB/s | 20 GB/s | 20 GB/s | 25 GB/s | 40 GB/s | 80 GB/s | 100 GB/s
memory bandwidth compared to Woody | 1.0 | 3.1 | 3.1 | 3.9 | 6.2 | 13 | 15.6

If one only looks at the increase of peak performance of the complete Emmy system, the world is bright: a 20x increase in six years. Not bad.

However, if one has an unoptimized (i.e. non-vectorized) serial code which is compute bound, the speed on the latest system arriving in 2013 will only be 73% of that of the system bought in 2007! Such a code can neither benefit from the wider SIMD units nor from the increased number of cores per node, but it does suffer from the decreased clock frequency.

But optimized parallel codes are challenged, too: the degree of parallelism increased from Woody to Emmy by a factor of 25. Remember Amdahl's law for strong scaling? The maximum speedup is bounded by the inverse of the serial (non-parallelizable) fraction of the runtime. Thus, to scale up to 20 parallel processes it is enough if 95% of the runtime can be executed in parallel, i.e. 5% may remain serial. To scale up to 11200 processes, however, less than 0.01% may be executed serially, and there must be no other overhead, e.g. due to communication or halo exchange!
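
Spelled out: with serial fraction s and N processes, Amdahl's law gives

    S(N) = \frac{1}{s + (1-s)/N} \le \frac{1}{s}

so a targeted speedup of 20 requires s \le 1/20 = 5\%, while a speedup of 11200 requires s \le 1/11200 \approx 0.009\%.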

Why cfx5solve from Ansys-13.0 fails on SuSE SLES11SP2 …

Recently, the operating system of one of RRZE’s HPC clusters was upgraded from SuSE SLES10 SP4 to SuSE SLES11 SP2 … one of the few things which broke due to the OS upgrade is Ansys/CFX-13.0. cfx5solve now aborts with

ccl2flow: * command language error *
Message: getChildList: unable to find the requested path
Context: returned by cclApi call

As one might expect, Ansys does not support running Ansys-13.0 on SuSE SLES11 or SLES11 SP2. There are also lots of reports on this error for various unsupported OS versions in the CFX forum at cfd-online, but no explanations or workarounds yet.

So, where does the problem come from? A long story starts …

First guess: SuSE SLES11 SP2 runs a 3.0 kernel. Thus, there might be some script which does not correctly parse the uname or so. However, the problem persists if cfx5solve is run using uname26 (or the equivalent long setarch variant). On the other hand, the problem does not occur if e.g. a CentOS-5 chroot is started on the SLES11 SP2 kernel, i.e. still the same kernel but the old user space. This clearly indicates that it is not a kernel issue but some library or tool problem.
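
For reference, the two cross-checks look roughly like this; the chroot path and the solver arguments are of course just placeholders:

[shell]
# same kernel, but pretend to be a 2.6 kernel -- the error persists
uname26 cfx5solve -def case.def
# (equivalent long form: setarch x86_64 --uname-2.6 cfx5solve -def case.def)

# same kernel, but old CentOS-5 user space -- the error does not occur
chroot /srv/centos5-root /bin/bash -l -c 'cd /work/case && cfx5solve -def case.def'
[/shell]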

Next guess: Perl comes bundled with Ansys/CFX but it might be some other command line tool from the Linux distribution which is used by cfx5solve, e.g. sed and friends or some changed bash behavior. Using strace on cfx5solve reveals several calls of such tools. But actually, none of them is problematic.

Thus, it must be a library issue: Ansys/CFX comes with most of the libraries it needs bundled, but there is always the glibc, i.e. /lib64/ld-linux-x86-64.so.2, /lib64/libc.so.6, etc. SuSE SLES10 used glibc-2.4, RHEL5 uses glibc-2.5, but SLES11 SP2 uses glibc-2.11.3.

The glibc cannot simply be overridden using LD_LIBRARY_PATH like any other library. But there are ways to do it anyway …

The error message suggests that ccl2flow.exe is causing the problems. So, let’s run that with an old glibc version. As cfx5solve allows specifying a custom ccl2flow binary we can use a simple shell script to call the actual ccl2flow.exe using the loader and glibc libraries from the CentOS5 glibc-2.5. Nothing changes; still the very same getChildList error message in the out file. Does that mean that ccl2flow.exe is not the bad guy?
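
Such a wrapper is nothing magic; a minimal sketch, assuming the CentOS-5 loader and libraries were copied to a local directory (all paths below are hypothetical):

[bash]
#!/bin/bash
# wrapper handed to cfx5solve as the "custom ccl2flow"; it starts the real
# ccl2flow.exe via the old dynamic loader and its matching glibc
OLDGLIBC=/opt/centos5-glibc            # contains ld-linux-x86-64.so.2 and libc.so.6
CCL2FLOW=/opt/ansys_inc/v130/CFX/bin/linux-amd64/ccl2flow.exe
exec "$OLDGLIBC/ld-linux-x86-64.so.2" --library-path "$OLDGLIBC:$LD_LIBRARY_PATH" "$CCL2FLOW" "$@"
[/bash]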

Interlude: let's see how ccl2flow.exe is called. The shell wrapper for ccl2flow was already there; thus, let's add some echo statements for the command line arguments and a sleep statement to inspect the working directory. Et voilà: on a good system, a quite long ccl file has just been created before ccl2flow is called; on a bad system running the new OS, however, the ccl file is almost empty. Thus, we should not blame ccl2flow.exe but what happens before it. Well, before it there is just the Ansys-supplied Perl running.

Let's have a closer look at the Perl script: understanding what the cfx5solve Perl script does seems to be impossible. Even if the Perl script is traced on a good and a bad system, there are no real insights. At some point, the bad system does not return an object while the other does. Thus, let's run Perl using the old glibc version. That's a little bit more tricky as cfx5solve is not a binary but a shell script which calls another shell script before finally calling an Ansys-supplied perl binary. But one can also manage these additional difficulties. Et voilà, the error message disappeared. What's going on? Perl is running fine but produces different results depending on the glibc version.

Interlude Ansys/CFX-14.0: this version is officially only supported on SuSE SLES11 but not on SLES11 SP2, if I got it correctly. But it runs fine on SLES11 SP2, too. What Perl version do they use? Exactly the same version, even the very same binary (i.e. both binaries have the same checksum!). Thus, it is not the Perl itself but some CFX-specific library it dynamically loads.

End of the story? Not yet, but almost. Having already spent so much time on the problem, I finally wanted to know which glibc versions are good or evil. I already knew that Redhat's glibc-2.5 is good and SuSE's glibc-2.11.3 is evil. Thus, let's try the versions in between using the official sources from ftp.gnu.org/gnu/glibc. Versions <2.10 or so require a fix for the configure script to recognize a modern as or ld as a good version. A few versions do not compile properly at all on my system. But there is no bad version; even with 2.11.3 there is no CFX error. Only from glibc-2.12.1 onwards does the well-known ccl2flow error appear. Not really surprising: SuSE and other Linux distributors have long lists of patches they apply, including back-ports from newer releases. There are almost 100 SuSE patches included in their version of glibc-2.11.3-17.39.1; no chance to see what they are doing.
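
Building a vanilla glibc release for such a test is straightforward (prefix and version below are only examples); the resulting loader and libc can then be injected via the same ld-linux/--library-path trick as for ccl2flow above:

[shell]
tar xf glibc-2.12.1.tar.gz
mkdir build-2.12.1 && cd build-2.12.1       # glibc must be built outside its source tree
../glibc-2.12.1/configure --prefix=/opt/glibc-test/2.12.1
make -j8 && make install
[/shell]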

My next guess was that the problem must be a commit between 2.11.3 and 2.12.1 of the official glibc versions. GNU provides a Git repository, and git bisect is your friend. This leads to commit f89d2f30 from Dec. 2009: Enable multiarch whenever possible. This commit did not change any actual code but only the default configuration parameters. That means the code causing the fault must have been in the sources already much earlier. It only debuted once multi-arch was switched on, in 2.12.1 of the vanilla version or earlier in the SuSE version (the spec file contains an --enable-multi-arch line; verified).
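
A bisect session for this looks roughly as follows; the revision names are placeholders for the last good and first bad versions actually used:

[shell]
git clone git://sourceware.org/git/glibc.git && cd glibc
git bisect start <first-bad-rev> <last-good-rev>
# for each revision git bisect checks out: build it, run the CFX test case
# against it, then tell git the verdict
git bisect good      # or: git bisect bad
[/shell]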

Going back in history, it finally turns out that glibc commit ab6a873f from Jun 2009 (SSSE3 strcpy/stpcpy for x86-64) is responsible for the problems leading to the failing ccl2flow.

Unfortunately, it is not possible to see if the most recent glibc versions still cause problems as cfx5solve already aborts earlier with some error message (Can’t call method “numstring” on an undefined value).

It is also not clear whether it is a glibc error, a problem in one of the CFX libraries, or whether it is just because of the tools used when Ansys-13.0 was compiled.

End of the story: if you are willing to take the risk of getting wrong results, you may make v130/CFX/tools/perl-5.8.0-1/bin/Linux-x86_64/perl use an older glibc version (or one compiled without multi-arch support) and thus avoid the ccl2flow error. But who knows what else fails, visibly or behind the scenes. There is an unknown risk of wrong results even if cfx5solve now runs in principle on SuSE SLES11 SP2.

I fully understand that users do not want to switch versions within a running project. Thus, it is really a pity that ISVs force users (and sysadmins) to run very old OS versions. SuSE SLES10 was released in 2006 and will reach the end of general support in July 2013; SLES11 was released in March 2009, while Ansys13 was released only in autumn 2010. And we are still supposed to stick to SLES10? It's time to increase the pressure on ISVs or to start developing in-house codes again.

Additional throughput nodes added to Woody cluster

Recently, 40 additional nodes with an aggregated AVX-Linpack performance of 4 TFlop/s have been added to RRZE's Woody cluster. The nodes were bought by RRZE and ECAP and shall provide additional resources especially for sequential and single-node throughput calculations. Each node has a single socket with one of Intel's latest "SandyBridge" 4-core CPUs (Xeon E3-1200 series), 8 GB of main memory, currently no harddisk (and thus no swap), and GBit Ethernet.

Current status: most of the new nodes are available for general batch processing; the configuration and software environment have stabilized.

Open problems:

  • no known ones

User visible changes and solved problems:

  • End of April 2012: all the new w10xx nodes got their harddisk in the meantime and have been reinstalled with SLES10 to match the old w0xx nodes
  • The module command was not available in PBS batch jobs; fixed since 2011-12-17 by patching /etc/profile to always source bashrc even in non-interactive shells
  • The environment variable $HOSTNAME was not defined. Fixed since 2011-12-19 via csh.cshrc.local and bash.bashrc.local.
  • SMT disabled on all nodes (since 2011-12-19). All visible cores are physical cores.
  • qsub is now generally wrapped – but that should be completely transparent for users (2012-01-16).
  • /wsfs = $FASTTMP is now available, too (2012-01-23)

Configuration notes:

  • The additional nodes are named w10xx.
  • The base operating system initially was Ubuntu 10.04 LTS; since end of April 2012 it is SuSE SLES10 as on the rest of Woody (cf. the update above).
    • The diskless Ubuntu images were provisioned using Perceus; the SLES10 installation is managed via autoinstall + cfengine.
    • The original Ubuntu setup differed from the rest of Woody, which has a stateful SuSE SLES10SP4 installation.
    • However, Tiny* for example also uses Ubuntu 10.04 (but in a stateful installation) and binaries should run on SLES and Ubuntu without recompilation.
  • The w10xx nodes have python-2.6 while the other w0xxx nodes have python-2.4. You can load the python/2.7.1 module to ensure a common Python environment.
  • compilation of C++ code on the compute nodes using one of RRZE’s gcc modules will probably fail; however, we never guaranteed that compiling on any compute nodes works; either use the system g++, compile on the frontend nodes, or …
  • The PBS daemon (pbs_mom) running on the additional nodes is much newer than on the old Woody nodes (2.5.9 vs. 2.3.x?); but the difference should not be visible for users.
  • Each PBS job runs in a cpuset. Thus, you only have access to the CPUs assigned to you by the queuing system. Memory, however, is not partitioned. Thus, make sure that you use less than 2 GB per requested core, as memory constraints cannot be imposed.
  • As the w10xx nodes currently do not have any local harddisk, they are also operated without swap. Thus, the virtual address space and the physically allocated memory of all processes must not exceed 7.2 GB in total. Also /tmp and /scratch are part of the main memory. Stdout and stderr of PBS jobs are also first spooled to main memory before they are copied to the final destination after the job ended.
  • Multi-node jobs are not supported as the nodes are intended as a throughput component.

Queue configuration / how to submit jobs:

  • The old w0xx nodes got the properties :c2 (as they are Intel Core2-based) and :any.
    The additional w10xx nodes got the properties :sb (as they are Intel SandyBridge-based) and :any.
  • Multi-node jobs (-lnodes=X:ppn=4 or -lnodes=X:ppn=4:c2 with X>1) are only eligible for the old w0xx nodes. :c2 will be added automatically if not present.
    Multi-node jobs which ask for :sb or :any are rejected.
  • Single-node jobs (-lnodes=1:ppn=4) by default also will only access the old w0xx nodes, i.e. :c2 will be added automatically if no node property is given. Thus, -lnodes=1:ppn=4 is identical to requesting -lnodes=1:ppn=4:c2.
    Single-node jobs which specify :sb (i.e. -lnodes=1:ppn=4:sb) will only go to the new w10xx nodes.
    Jobs with :any (i.e. -lnodes=1:ppn=4:any) will run on any available node.
  • Single-core jobs (-lnodes=1:ppn=Y:sb with Y<4, i.e. requesting less than a complete node) are only supported on the new w10xx nodes. Specifying :sb is mandatory.
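
To make the node-property rules above concrete, a minimal job header for a single-node throughput run on the new w10xx nodes could look like this (job name and walltime are just placeholders):

[bash]
#!/bin/bash -l
#PBS -l nodes=1:ppn=4:sb
#PBS -l walltime=04:00:00
#PBS -N throughput-job

cd $PBS_O_WORKDIR
./a.out
[/bash]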

Technical details:

  • PBS routing originally did not work as expected for jobs where the resource requests are given on the command line (e.g. qsub -lnodes=1:ppn=4 job.sh caused trouble).
    Some technical background: (1) the torque-submitfilter cannot modify the resource requests given on the command line and (2) routing queues cannot add node properties to resource requests any more, thus, for this type of job routing to the old nodes does not seem to be possible … Using distinct queues for the old and new nodes has the disadvantage that jobs cannot ask for “any available CPU”. Moreover, the maui scheduler does not support multi-dimensional throttling policies, i.e. has problems if one user submits jobs to different queues at the same time.
    The solution probably is a wrapper around qsub as suggested in the Torque mailing list back in May 2008. At RRZE we already use qsub wrappers, e.g. qsub.tinyblue. Duplicating some of the logic of the submit filter into the submit wrapper is not really elegant but seems to be the only solution right now. (As a side note: interactive jobs do not seem to suffer from the problem as there is special handling in the qsub source code which writes the command line arguments to a temporary file which is then subject to processing by the submit filter.)

Getting started on LiMa

Early friendly-user access was enabled on LiMa at the end of October 2010. The system and user environment are still work in progress. Here are a few notes ("FAQs") describing specialties of the present configuration and major changes during the early days of operation …

What are the hardware specifications?

  • 2 login nodes (lima1/lima2)
    • two sockets with Intel Westmere X5650 (2.66 GHz) processors; 12 physical cores, 24 logical cores per node
    • 48 GB DDR3 memory (plus 48 GB swap)
    • 500 GB in /scratch on a local harddisk
  • 500 compute nodes (Lrrnn)
    • two sockets with Intel Westmere X5650 (2.66 GHz) processors; 12 physical cores, 24 logical cores per node
    • 24 GB DDR3 memory (NO swap) – roughly 22 GB available for user applications as the rest is used for the OS in the diskless operation
    • NO local harddisk
    • QDR Infiniband
  • parallel filesystem (/lxfs)
    • ca. 100 TB capacity
    • up to 3 GB/s of aggregated bandwidth
  • OS: CentOS 5.5
  • batch system: torque/maui (as also on the other RRZE systems)

Where / how should I login?

SSH to lima.rrze.uni-erlangen.de and you will end up on one of the two login nodes. As usual, these login nodes are only accessible from within the University network. A VPN split tunnel might not be enough to be on the University network, as some of the University's private IP addresses, e.g. those used by the HPC systems, are not added to the default list of networks routed through the split tunnel. In case of problems, log into cshpc.rrze.uni-erlangen.de first.

I have some specific problems on LiMa

First of all check if the issue is already described in the online version of the article. If not, contact hpc-support@rrze with as much information as possible.

I’d like to contribute some documentation

Please add a comment to this article (log into the Blog system using your IDM account/password, which is not necessarily identical to your HPC account; all FAU students and staff should have an IDM account) or send an email to hpc@rrze.

There are almost no modules visible (Update 2010-11-25)

For now, please execute source /apps/rrze/etc/use-rrze-modules.csh (for csh/tcsh) or . /apps/rrze/etc/use-rrze-modules.sh (for bash) to initialize the RRZE environment. This command will also set some environment variables like WOODYHOME or FASTTMP.

Once the system goes into regular operation, this step will no longer be required.

2010-11-25: The login and compute nodes now already have the full user environment by default.

How should I submit my jobs?

Always use ppn=24 when requesting nodes as each node has 2×6 physical cores but 24 logical cores due to SMT.
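
For example (walltime and script name are placeholders):

[shell]
qsub -l nodes=4:ppn=24 -l walltime=01:00:00 job.sh
[/shell]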

$FASTTMP is empty – where are my files from the parallel filesystem on Woody?

The parallel filesystems on Woody and LiMa are different and not shared. However, $FASTTMP is used on both systems to point to the local parallel filesystem.

How can I detect in my login scripts whether I'm on Woody or on LiMa?

There are many different ways to detect this; one option is to test for /wsfs/$GROUP/$USER (Woody or some of the Transtec nodes) and /lxfs/$GROUP/$USER (LiMa).
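
A minimal sketch for a bash login script, assuming the directory layout mentioned above:

[bash]
# crude cluster detection via the cluster-specific parallel filesystems
# ($GROUP/$USER as in the paths mentioned above)
if [ -d "/lxfs/$GROUP/$USER" ]; then
    cluster=lima
elif [ -d "/wsfs/$GROUP/$USER" ]; then
    cluster=woody        # or one of the Transtec nodes
else
    cluster=unknown
fi
[/bash]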

In the future, we might provide an environment variable telling you the cluster (Transtec, Woody, TinyXY, LiMa).

Should I recompile my binaries for LiMa?

Many old binaries will run on LiMa, too. However, we recommend recompiling on the LiMa frontends, as many of the tools and libraries are newer in their default version on LiMa than on Woody.

How do I start MPI jobs on LiMa? (Update: 2010-11-03; 2010-12-18)

First of all, correct placement ("pinning") of processes is much more important on LiMa (and also TinyXY) than on Woody (or the Transtec cluster), as all modern nodes are ccNUMA systems and you only achieve best performance if data accesses go to the local memory. Attend an HPC course if you do not know what ccNUMA is!

  1. Do not use the mpirun in the default $PATH if no MPI module is loaded; this hopefully will change when regular user operation starts. Meanwhile, there is no mpirun in the default $PATH at all (unless you have the openmpi module loaded).
  2. For Intel-MPI, to use a start mechanism more or less compatible with the other RRZE clusters, use /apps/rrze/bin/mpirun_rrze-intelmpd -intelmpd -pin 0_1_2_3_4_5_6_7_8_9_10_11 .... In this way, you can explicitly pin all your processes as on the other RRZE clusters. However, this start mechanism does not scale up to the highest process counts.
  3. Another option (currently only available on LiMa) is to use one of the official mechanisms of Intel MPI (assuming you use bash for your job script and intelmpi/4.0.0.028-[intel|gnu] is loaded):
    export PPN=12
    gen-hostlist.sh $PPN
    mpiexec.hydra -print-rank-map -f nodes.$PBS_JOBID [-genv I_MPI_PIN] -n XX ./a.out
    Attention: pinning does not work properly in all circumstances for this start method. See chapter 3.2 of /apps/intel/mpi/4.0.0.028/doc/Reference_Manual.pdf for more details on I_MPI_PIN and friends.
  4. Another option (currently only available on LiMa) is to use one of the official mechanisms of Intel MPI (assuming you use bash for your job script and intelmpi/4.0.1.007-[intel|gnu] is loaded):
    export PPN=12
    export NODES=`uniq $PBS_NODEFILE | wc -l`
    export I_MPI_PIN=enable
    mpiexec.hydra -rmk pbs -ppn $PPN -n $(( $PPN * $NODES )) -print-rank-map ./a.out
    Attention: pinning does not work properly in all circumstances for this start method. See chapter 3.2 of /apps/intel/mpi/4.0.1.007/doc/Reference_Manual.pdf for more details on I_MPI_PIN and friends.
  5. There are of course many more possibilities to start MPI programs …

Hints for Open MPI (2010-12-18)

Starting from today, the openmpi modules on LiMa set OMPI_MCA_orte_tmpdir_base=/dev/shm, as $TMPDIR points to a directory on the LXFS parallel filesystem and Open MPI would otherwise show bad performance for shared-memory communication.

Pinning can be achieved for Open MPI using mpirun -npernode $ppn -bind-to-core -bycore -n $(( $ppn * $nodes )) ./a.out

PBS output files already visible while the job is running (Update: 2010-11-04; 2010-11-25)

As the compute nodes run without any local harddisk (yes, there is only RAM and nothing else locally on the compute nodes to store things), we are experimenting with a special PBS/MOM setting which writes the PBS output files (*.[o|e]$PBS_JOBID, or whatever you specified using #PBS -[o|e]) directly to their final destination. Please do not rename or delete these files while the job is running, and do not be surprised that the files are already visible during the run.

The special settings are: $spool_as_final_name and $nospool_dir_list in the mom_config. I’m not sure if we will keep these settings in the final configuration. They save space in the RAM of the compute node but there are also administrative disadvantages …

2010-11-25: do not use #PBS -o filename or #PBS -e filename as PBS may cause trouble if the file already exists. Without the -o/-e PBS will generate files based on the script name or #PBS -N name and append .[o|e]$PBS_JOBID.


Where should I store intermediate files?

The best is to avoid intermediate files or small but frequent file IO altogether. There is no local harddisk, and /tmp is part of the main memory! Please contact hpc-support@rrze for assistance in analyzing your IO requirements.

Large files which are read/written in large blocks should be put to $FASTTMP. Remember: as on Woody there is no backup on $FASTTMP. There are currently also no quotas – but we will probably implement high-water-mark deletion as on Woody.

Small files probably should be put on $WOODYHOME.

Files you want to keep for a long time should be moved to /home/vault/$GROUP/$USER. The data access rate to /home/vault is currently limited on LiMa. Please use it with care.

/tmp, $TMPDIR and $SCRATCH (2010-11-20 / update 2010-11-25)

As the nodes are diskless, /tmp is part of a ramdisk and does not provide much temporary scratch space. As of 2010-11-20, a new environment variable $SCRATCH is defined by use-rrze-modules.csh/sh which points to a node-specific directory on the parallel filesystem. The PBS configuration is not yet adapted, i.e. $TMPDIR within a job still points to a job-specific directory within the tiny /tmp directory.

2010-11-25: /scratch ($SCRATCH) is a node-specific directory on LXFS. (At least once the compute nodes are rebooted.)

2010-11-25: $TMPDIR now points to a job-private directory within $SCRATCH, i.e. is on LXFS; all data in $TMPDIR will be deleted at the end of the job. (At least once the compute nodes are rebooted.)

/tmp is still small and part of the node’s RAM.

mpirun from my commercial code aborts during startup with connection refused

On the LiMa nodes, true rsh binaries were initially installed, but for security reasons no RSH daemons are running, and there are also no symlinks from rsh to ssh as on the other RRZE systems. Make sure that your MPI implementation does not try to start remote processes using rsh; enforce the usage of SSH. The rsh binaries will probably be uninstalled before regular user operation and replaced by links to the ssh binary (as on most of the other RRZE clusters).
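
How SSH is enforced depends on the MPI implementation; two common examples (not specific to any particular commercial code) are sketched below:

[shell]
# Intel MPI with the hydra process manager
mpiexec.hydra -bootstrap ssh -n 24 ./a.out

# Open MPI
mpirun --mca plm_rsh_agent ssh -n 24 ./a.out
[/shell]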

Update 2010-11-05: rsh and rsh-server have been uninstalled. There are however no links from rsh to ssh.

There is obviously some software installed in /opt (e.g. /opt/intel/ and /opt/openmpi)

Do not use any software from /opt. All these installations will be removed before regular user operation starts. RRZE-provided software is in /apps and in almost all cases accessible through modules. /apps is not (and will not be) shared between LiMa and Woody.

My application tells that it could not allocate the required memory (Update 2010-11-12)

Memory overcommit is limited on LiMa. Thus, not only the resident (i.e. actually used) memory is relevant but also the virtual (i.e. total) memory, which sometimes is significantly higher. Complain to the application developer. There is currently no real workaround on LiMa.
As we are still experimenting with the optimal values of the overcommit limitation, there might be temporary changes (including times when overcommitment is not limited).

Memory issues might also come from an inappropriate stacksize line in ~/.cshrc (or ~/.bashrc). Try to remove any stacksize line in your login scripts.

IO performance to $FASTTMP (/lxfs/GROUP/USER) seems to be very low (2010-11-03/2010-11-12)

The default striping is not optimal yet; it uses only one OST, thus performance is limited to roughly 100 MB/s. Use lfs setstripe --count -1 --size 128m DIRECTORY to manually activate striping over 16 OSTs. 2010-11-25: RRZE will activate reasonable file striping, thus it should not be necessary for normal users to set striping manually. Modified striping only affects newly created files/subdirectories.

The stripe size (--size argument) should match your applications’ IO patterns.
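
If you do set the striping yourself, a typical sequence looks like this (the directory name is just a placeholder); lfs getstripe lets you verify the result:

[shell]
lfs setstripe --count -1 --size 128m $FASTTMP/results
lfs getstripe $FASTTMP/results
[/shell]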

PBS commands do not work – but they used to work / Jobs are not started (2010-11-03)

The PBS server currently has a severe bug: if a job requests too much memory and thus crashes the master node of the job, the PBS server stalls for quite a long time (several hours) and does not respond to any requests at all (although it is running on a different server). This may lead to hanging user commands or error messages like pbs_iff: cannot read reply from pbs_server. No permission or cannot connect to server ladm1 or Unauthorized Request. And of course, while the PBS server process hangs, no new jobs are started.

If you are interested in the technical details: look at the bugzilla entry at
http://www.clusterresources.com/bugzilla/show_bug.cgi?id=85

mpiexec.hydra and LD_LIBRARY_PATH (2010-11-08)

It currently looks like mpiexec.hydra from Intel-MPI 4.0.0.x does not respect LD_LIBRARY_PATH settings. Further investigations are currently carried out. Right now it looks like an issue with the NEC login scripts on the compute nodes which overwrite the LD_LIBRARY_PATH setting.

Suggested workaround for now: mpiexec.hydra ... env LD_LIBRARY_PATH=$LD_LIBRARY_PATH ./a.out.

Other MPI start mechanisms might be affected, too.

2010-11-25: it is not clear if the workaround is still required, as mpi-selector/Oscar-modules/env-switcher have been uninstalled in the meantime.

MPI-IO and Intel MPI 4.0.1.007 (2010-12-09)

Probably not only a LiMa issue but also relevant on LiMa. MPI-IO to $FASTTMP with Intel MPI 4.0.1.007 (4.0up1) fails with “File locking failed” unless the following environment variables are set: I_MPI_EXTRA_FILESYSTEM=on I_MPI_EXTRA_FILESYSTEM_LIST=lustre. Intel MPI up to 4.0.0.028 worked fine without and with these variables set.

Since 2011-02-02, I_MPI_EXTRA_FILESYSTEM=on and I_MPI_EXTRA_FILESYSTEM_LIST=lustre are set automatically by the intelmpi/4.0.1.007-* modules.

STAR-CCM+ (2010-11-12)

Here is a (partially tested) sample job script:

[bash]
#!/bin/bash -l
#PBS -l nodes=4:ppn=24
#PBS -l walltime=00:25:00
#PBS -N simXYZ
#PBS -j eo

# star-ccm+ arguments
CCMARGS="-load simxyz.sim"

# specify the time you want to have to save results, etc.
TIME4SAVE=1200

# number of cores to use per node
PPN=12

# some MPI options; explicit pinning – must match the PPN line
MPIRUN_OPTIONS="-cpu_bind=v,map_cpu:0,1,2,3,4,5,6,7,8,9,10,11"

### normally, no changes should be required below ###

module add star-ccm+/5.06.007

echo

# count the number of nodes
NODES=`uniq ${PBS_NODEFILE} | wc -l`
# calculate the number of cores actually used
CORES=$(( ${NODES} * ${PPN} ))

# check if enough licenses should be available
/apps/rrze/bin/check_lic.sh -c ${CDLMD_LICENSE_FILE} hpcdomains $(($CORES -1)) ccmpsuite 1
. /apps/rrze/bin/check_autorequeue.sh

# change to working directory
cd ${PBS_O_WORKDIR}

# generate new node file
for node in `uniq ${PBS_NODEFILE}`; do
echo "${node}:${PPN}"
done > pbs_nodefile.${PBS_JOBID}

# some exit/error traps for cleanup
trap 'echo; echo "*** Signal TERM received: `date`"; echo; rm pbs_nodefile.${PBS_JOBID}; exit' TERM
trap 'echo; echo "*** Signal KILL received: `date`"; echo; rm pbs_nodefile.${PBS_JOBID}; exit' KILL

# automatically detect how much time this batch job requested and adjust the
# sleep accordingly
export TIME4SAVE
( sleep ` qstat -f ${PBS_JOBID} | awk -v t=${TIME4SAVE} \
'{if ( $0 ~ /Resource_List.walltime/ ) \
{ split($3,duration,":"); \
print duration[1]*3600+duration[2]*60+duration[3]-t }}' ` && \
touch ABORT ) >& /dev/null &
SLEEP_ID=$!

# start STAR-CCM+
starccm+ -batch -rsh ssh -mppflags "$MPIRUN_OPTIONS" -np ${CORES} -machinefile pbs_nodefile.${PBS_JOBID} ${CCMARGS}

# final clean up
rm pbs_nodefile.${PBS_JOBID}
pkill -P ${SLEEP_ID}
[/bash]

Solved issues

  • cmake – (2010-10-26) – rebuilt from sources; should work now without dirty LD_LIBRARY_PATH settings
  • svn – (2010-10-26) – installed the CentOS rpm on the login nodes; the version unfortunately is a little bit old (1.4.2); however, there is no real chance to get a newer one on LiMa. Go to cshpc to have at least 1.5.0.
  • qsub from compute nodes – (2010-10-26) – should work now; added allow_node_submit=true to PBS server
  • vim – (2010-10-27) – installed CentOS rpm of vim-enhanced on the login nodes
  • autologout – (2010-10-27) – will be disabled for CSH once /apps/rrze/etc/use-rrze-modules.csh is sourced
  • xmgrace, gnuplot, (xauth) – (2010-11-02) – installed on the login nodes; version from CentOS rpm
  • clock skew on /lxfs ($FASTTMP) is now hopefully really fixed (2010-11-04)
  • rsh and rsh-server have been uninstalled from compute/login/admin nodes (2010-11-05)
  • several usability improvements added to use-rrze-modules.csh/sh (2010-11-19)

LiMa, LIKWID and Turbo Boost

With the Nehalem processors (Xeon X55xx and Core i7), Intel introduced Turbo Boost mode, in which, put very simply, individual processor cores can automatically clock higher when only parts of the whole processor chip are in use and there is thus headroom in power consumption, voltage and temperature. The LiMa cluster is equipped with Intel Westmere processors running at 2.66 GHz (Intel Xeon X5650). These processors in principle allow (up to) two Turbo Boost steps (+2×133 MHz) even when all cores are in use, and up to three Turbo Boost steps (+3×133 MHz) when at most two of the six cores of a processor chip are in use (cf. Table 2 of the Intel Xeon Processor 5600 Series Specification Update of September 2010). This means that under favorable conditions all processor cores run at 2.93 GHz even under full load, although one actually only bought a 2.66 GHz processor.

Time-resolved clock frequency during a LINPACK run on a good node

The adjacent graph shows that nearly two full Turbo Boost steps are indeed possible even under full load. Here, the clock frequency of all physical cores in the node was measured with LIKWID at a resolution of 5 seconds while the multi-threaded LINPACK version from Intel's MKL was running on the node. Before the LINPACK processes start, the processor cores have clocked down due to the ondemand frequency governor of the Linux operating system. As soon as load is generated, the processors clock up. When, after quite a few seconds, the processors are "heated through", the clock frequency drops only slightly from 2.93 GHz to around 2.90 GHz. At the end of (or shortly after) the LINPACK run, the processors briefly clock up once more, since the load has decreased and the thermal and electrical limits for Turbo Boost mode are no longer exceeded, while at the same time the ondemand governor of the Linux operating system has not yet clocked the processors down again. Essentially only two curves are visible in the graph, although there are actually 12, since all cores of a socket practically always run at the same frequency.

Time-resolved clock frequency during a LINPACK run on a bad node

Unfortunately, not all X5650 processors always run with nearly two Turbo Boost steps, i.e. at 2.93 GHz, under full load, as the second graph shows. Here the compute cores clock down to "only" 2.7 GHz for large parts of the LINPACK run, which reduces the measured node performance from about 128.5 GFlop/s to 120.5 GFlop/s, i.e. by more than 5%, a loss one will certainly also see in one form or another in real applications and not only in the synthetic LINPACK.

One can currently only speculate about the causes of the lower turbo clocking of the second node. It is demonstrably not the thermal paste between the processors and the heat sinks. It is also not the power supply unit or the position in the rack, since moving the compute node to another enclosure in another rack brought no improvement. BIOS version, CMOS settings and CPU stepping should hopefully also be identical on all nodes. It may well be that two processors out of hundreds have a defect, but how likely is it that exactly these two processors end up in the very same machine … For the moment, I would therefore suspect "tolerances" of the mainboards with a negative effect as the most likely cause. But NEC will certainly still figure out the details, in their own interest and in ours …

Performance measurement tools such as LIKWID definitely also pay off for the acceptance testing of HPC clusters.

Here, roughly, are the commands I used for the measurement (LIKWID 2.0 is required as a minimum because of the daemon mode, and /dev/cpu/*/msr must be readable and writable by the calling user):

[shell]
# sample the clock frequency of all 12 physical cores every 5 seconds in the background
/opt/likwid/2.0/bin/likwid-perfctr -c 0-11 -g CLOCK -d 5 | tee /tmp/clock-speed-`hostname`-`date +%Y%m%d-%H%M`.log > /dev/null &
sleep 15
# run the multi-threaded MKL LINPACK pinned to the 12 physical cores
env OMP_NUM_THREADS=12 taskset -c 0-11 /opt/intel/Compiler/11.1/073/mkl/benchmarks/linpack/xlinpack_xeon64 lininput_xeon64-50k
sleep 15
# stop the background measurement
kill $! >& /dev/null
[/shell]

Since the clock frequency is a derived quantity, individual CPUs may report nan as their clock frequency when no instructions are executed. But of course this only happens when no program is running on that processor core anyway.

LiMa, refrigerators and toasters

Our new HPC cluster has closed racks with cooling units in between (cf. https://blogs.fau.de/zeiser/2010/09/21/rechnerzuwachs-und-generationswechsel-bei-den-hpc-clustern-am-rrze/). The reply to my colleague's mail asking for DNS entries for the "refrigerators" was: "…but once you get to the toasters, please use IPv6." The compute nodes are admittedly something like toasters, but they still have IPv4 addresses.

Current LiMa status:

Back to serious matters: the mechanical setup and the cabling of the cluster are practically finished, and last Thursday (30.9.2010) all nodes of the new cluster were switched on for the first time. So things are moving forward …

A few more impressions from the installation:

Laying the pipes in the raised floor


Chilled-water pipes with insulation in the raised floor. Note the ratio of the water-pipe diameter to the thickness of the raised-floor supports!

Temporary storage of the compute nodes and cables


Pipes and Infiniband cables in the raised floor

Front side of the 324-port Infiniband switch with 12x cables to further leaf switches


Back side of the Infiniband switch: a good 300 copper cables and some optical Infiniband cables

Section of the LiMa rack row


TinyFat rack and a cooling unit