Thomas Zeiser

Some comments by Thomas Zeiser about HPC@RRZE and other things


Getting started on LiMa

Early friendly user access to LiMa was enabled at the end of October 2010. The system and user environment are still work in progress. Here are a few notes (“FAQs”) describing peculiarities of the present configuration and major changes during the early days of operation …

What are the hardware specifications?

  • 2 login nodes (lima1/lima2)
    • two sockets with Intel Westmere X5650 (2.66 GHz) processors; 12 physical cores, 24 logical cores per node
    • 48 GB DDR3 memory (plus 48 GB swap)
    • 500 GB in /scratch on a local harddisk
  • 500 compute nodes (Lrrnn)
    • two sockets with Intel Westmere X5650 (2.66 GHz) processors; 12 physical cores, 24 logical cores per node
    • 24 GB DDR3 memory (NO swap) – roughly 22 GB available for user applications as the rest is used by the OS in diskless operation
    • NO local harddisk
    • QDR Infiniband
  • parallel filesystem (/lxfs)
    • ca. 100 TB capacity
    • up to 3 GB/s of aggregated bandwidth
  • OS: CentOS 5.5
  • batch system: torque/maui (as also on the other RRZE systems)

Where / how should I login?

SSH to lima.rrze.uni-erlangen.de and you will end up on one of the two login nodes. As usual, these login nodes are only accessible from within the University network. A VPN split tunnel might not be enough to be on the University network, as some of the University’s private IP addresses, e.g. the ones used by the HPC systems, are not in the default list of networks routed through the split tunnel. In case of problems, log into cshpc.rrze.uni-erlangen.de first.
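
A minimal sketch of both ways to log in (username is a placeholder):

[bash]
# direct login (from within the University network)
ssh username@lima.rrze.uni-erlangen.de

# if that fails, e.g. with a VPN split tunnel, hop via cshpc first
ssh username@cshpc.rrze.uni-erlangen.de
ssh lima.rrze.uni-erlangen.de
[/bash]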

I have some specific problems on LiMa

First of all, check whether the issue is already described in the online version of this article. If not, contact hpc-support@rrze with as much information as possible.

I’d like to contribute some documentation

Please add a comment to this article (log into the Blog system using your IDM account/password, which is not necessarily identical to your HPC account; all FAU students and staff should have an IDM account) or send an email to hpc@rrze.

There are almost no modules visible (Update 2010-11-25)

For now, please execute source /apps/rrze/etc/use-rrze-modules.csh (for csh/tcsh) or . /apps/rrze/etc/use-rrze-modules.sh (for bash) to initialize the RRZE environment. This command will also set some environment variables such as WOODYHOME or FASTTMP.

Once the system goes into regular operation, this step will no longer be required.

2010-11-25: The login and compute nodes now have the full user environment by default.

How should I submit my jobs?

Always use ppn=24 when requesting nodes as each node has 2×6 physical cores but 24 logical cores due to SMT.
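
For example, the resource request of a four-node job would look like this (node count and walltime are just placeholders):

[bash]
#!/bin/bash -l
# always request all 24 logical cores per node
#PBS -l nodes=4:ppn=24
#PBS -l walltime=01:00:00
[/bash]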

$FASTTMP is empty – where are my files from the parallel filesystem on Woody?

The parallel filesystems on Woody and LiMa are different and not shared. However, $FASTTMP is used on both systems to point to the local parallel filesystem.

How can I detect in my login scripts whether I’m on Woody or on LiMa?

There are many different ways to detect this; one option is to test for the existence of /wsfs/$GROUP/$USER (Woody or some of the Transtec nodes) or /lxfs/$GROUP/$USER (LiMa).
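
A minimal sketch for a bash login script based on this directory test (the MYCLUSTER variable name is just an example, not an official variable):

[bash]
# distinguish the clusters by their parallel filesystem
if [ -d /lxfs/$GROUP/$USER ]; then
    export MYCLUSTER=lima
elif [ -d /wsfs/$GROUP/$USER ]; then
    export MYCLUSTER=woody     # Woody or some of the Transtec nodes
fi
[/bash]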

In the future, we might provide an environment variable telling you the cluster (Transtec, Woody, TinyXY, LiMa).

Should I recompile my binaries for LiMa?

Many old binaries will run on LiMa, too. However, we recommend recompiling on the LiMa front ends, as the default versions of many tools and libraries are newer on LiMa than on Woody.

How do I start MPI jobs on LiMa? (Update: 2010-11-03; 2010-12-18)

First of all, correct placement (“pinning”) of processes is much more important on LiMa (and also TinyXY) than on Woody (or the Transtec cluster), as all modern nodes are ccNUMA systems and you only achieve best performance if data accesses go to local memory. Attend an HPC course if you do not know what ccNUMA is!

  1. Do not use an mpirun from the default $PATH when no MPI module is loaded; this was hoped to change when regular user operation starts. Update: there is no mpirun in the default $PATH anymore (unless you have the openmpi module loaded).
  2. For Intel MPI, to use a start mechanism more or less compatible with the other RRZE clusters, use /apps/rrze/bin/mpirun_rrze-intelmpd -intelmpd -pin 0_1_2_3_4_5_6_7_8_9_10_11 .... In this way, you can explicitly pin all your processes as on the other RRZE clusters. However, this mechanism does not scale to the highest process counts.
  3. Another option (currently only available on LiMa) is to use one of the official start mechanisms of Intel MPI (assuming you use bash for your job script and intelmpi/4.0.0.028-[intel|gnu] is loaded):
    export PPN=12
    gen-hostlist.sh $PPN
    mpiexec.hydra -print-rank-map -f nodes.$PBS_JOBID [-genv I_MPI_PIN] -n XX ./a.out
    Attention: pinning does not work properly in all circumstances for this start method. See chapter 3.2 of /apps/intel/mpi/4.0.0.028/doc/Reference_Manual.pdf for more details on I_MPI_PIN and friends.
  4. Another option (currently only available on LiMa) is to use one of the official start mechanisms of Intel MPI (assuming you use bash for your job script and intelmpi/4.0.1.007-[intel|gnu] is loaded); a complete job-script sketch based on this variant follows after this list:
    export PPN=12
    export NODES=`uniq $PBS_NODEFILE | wc -l`
    export I_MPI_PIN=enable
    mpiexec.hydra -rmk pbs -ppn $PPN -n $(( $PPN * $NODES )) -print-rank-map ./a.out
    Attention: pinning does not work properly in all circumstances for this start method. See chapter 3.2 of /apps/intel/mpi/4.0.1.007/doc/Reference_Manual.pdf for more details on I_MPI_PIN and friends.
  5. There are of course many more possibilities to start MPI programs …
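
For illustration, a hedged sketch of a complete job script using the mpiexec.hydra variant from item 4 (module name as mentioned above; node count, walltime and executable are placeholders):

[bash]
#!/bin/bash -l
#PBS -l nodes=2:ppn=24
#PBS -l walltime=00:30:00

# initialize the RRZE environment and load an Intel MPI module
. /apps/rrze/etc/use-rrze-modules.sh
module add intelmpi/4.0.1.007-intel

cd $PBS_O_WORKDIR

# 12 MPI processes per node (physical cores only)
export PPN=12
export NODES=`uniq $PBS_NODEFILE | wc -l`
export I_MPI_PIN=enable

mpiexec.hydra -rmk pbs -ppn $PPN -n $(( $PPN * $NODES )) -print-rank-map ./a.out
[/bash]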

Hints for Open MPI (2010-12-18)

Starting from today, the openmpi modules on LiMa set OMPI_MCA_orte_tmpdir_base=/dev/shm because $TMPDIR points to a directory on the LXFS parallel filesystem and Open MPI would thus show bad performance for shared-memory communication.

Pinning can be achieved for Open MPI using mpirun -npernode $ppn -bind-to-core -bycore -n $(( $ppn * $nodes )) ./a.out
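
The $ppn and $nodes variables in this line are not set automatically; a small sketch for a bash job script:

[bash]
# derive the values used in the mpirun line above
ppn=12                               # MPI processes per node (physical cores)
nodes=`uniq $PBS_NODEFILE | wc -l`   # number of nodes allocated by PBS

mpirun -npernode $ppn -bind-to-core -bycore -n $(( $ppn * $nodes )) ./a.out
[/bash]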

PBS output files already visible while the job is running (Update: 2010-11-04; 2010-11-25)

As the compute nodes run without any local harddisk (yes, there is only RAM and nothing else locally on the compute nodes to store things), we are experimenting with a special PBS/MOM setting which writes the PBS output files (*.[o|e]$PBS_JOBID, or what you specified using #PBS -[o|e]) directly to the final destination. Please do not rename/delete these files while the job is running, and do not be surprised that the files are already visible while the job is running.

The special settings are: $spool_as_final_name and $nospool_dir_list in the mom_config. I’m not sure if we will keep these settings in the final configuration. They save space in the RAM of the compute node but there are also administrative disadvantages …

2010-11-25: do not use #PBS -o filename or #PBS -e filename, as PBS may cause trouble if the file already exists. Without -o/-e, PBS will generate files based on the script name or #PBS -N name and append .[o|e]$PBS_JOBID.
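
A minimal example (the job name is a placeholder):

[bash]
# let PBS generate the output file names itself
#PBS -N simXYZ
# results in simXYZ.o$PBS_JOBID and simXYZ.e$PBS_JOBID
[/bash]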

Where should I store intermediate files?

It is best to avoid intermediate files and small but frequent file I/O. There is no local harddisk; /tmp is part of the main memory! Please contact hpc-support@rrze for assistance in analyzing your I/O requirements.

Large files which are read/written in large blocks should be put on $FASTTMP. Remember: as on Woody, there is no backup of $FASTTMP. There are currently also no quotas – but we will probably implement high-water-mark deletion as on Woody.

Small files should probably be put on $WOODYHOME.

Files you want to keep for a long time should be moved to /home/vault/$GROUP/$USER. The data access rate to /home/vault is currently limited on LiMa. Please use it with care.

/tmp, $TMPDIR and $SCRATCH (2010-11-20 / update 2010-11-25)

As the nodes are diskless, /tmp is part of a ramdisk and does not provide much temporary scratch space. As of 2010-11-20, a new environment variable $SCRATCH is defined by use-rrze-modules.csh/sh which points to a node-specific directory on the parallel filesystem. The PBS configuration is not yet adapted, i.e. $TMPDIR within a job still points to a job-specific directory within the tiny /tmp directory.

2010-11-25: /scratch ($SCRATCH) is a node-specific directory on LXFS. (At least once the compute nodes are rebooted.)

2010-11-25: $TMPDIR now points to a job-private directory within $SCRATCH, i.e. is on LXFS; all data in $TMPDIR will be deleted at the end of the job. (At least once the compute nodes are rebooted.)

/tmp is still small and part of the node’s RAM.
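
A hedged sketch of how a job might use these directories; my_solver and the file names are purely hypothetical:

[bash]
# use the job-private $TMPDIR on LXFS for temporary data
cd $TMPDIR
$HOME/bin/my_solver > run.log        # hypothetical application

# $TMPDIR is deleted at the end of the job -- copy results you want to keep
cp run.log $FASTTMP/run.log.$PBS_JOBID
[/bash]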

mpirun from my commercial code aborts during startup with connection refused

On the LiMa nodes, real rsh client binaries are currently installed. Make sure that your MPI implementation does not try to start remote processes using rsh: for security reasons no rsh daemons are running, and there are also no symlinks from rsh to ssh as on the other RRZE systems. Enforce the usage of SSH. The rsh binaries will probably be uninstalled before regular user operation and replaced by links to the ssh binary (as on most of the other RRZE clusters).

Update 2010-11-05: rsh and rsh-server have been uninstalled. There are, however, no links from rsh to ssh.

There is obviously some software installed in /opt (e.g. /opt/intel/ and /opt/openmpi)

Do not use any software from /opt. All these installations will be removed before regular user operation starts. RRZE-provided software is in /apps and in almost all cases accessible through modules. /apps is not (and will not be) shared between LiMa and Woody.

My application reports that it could not allocate the required memory (Update 2010-11-12)

Memory overcommit is limited on LiMa. Thus, not only the resident (i.e. actually used) memory is relevant but also the virtual (i.e. total) memory, which sometimes is significantly higher. Complain to the application developer. There is currently no real workaround on LiMa.
As we are still experimenting with the optimal values for the overcommit limitation, there might be temporary changes (including times when overcommitment is not limited).

Memory issues might also come from an inappropriate stacksize line in ~/.cshrc (or ~/.bashrc). Try to remove any stacksize line in your login scripts.
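
Such a line typically looks like one of the following (examples only; check your own login scripts):

[bash]
# ~/.cshrc (csh/tcsh)
limit stacksize unlimited

# ~/.bashrc (bash)
ulimit -s unlimited
[/bash]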

IO performance to $FASTTMP (/lxfs/GROUP/USER) seems to be very low (2010-11-03/2010-11-12)

The default striping is not optimal yet; it uses only one OST, thus performance is limited to roughly 100 MB/s. Use lfs setstripe --count -1 --size 128m DIRECTORY to manually activate striping over all 16 OSTs. 2010-11-25: RRZE will activate reasonable default file striping, thus it should not be necessary for normal users to set striping manually. Modified striping only affects newly created files/subdirectories.

The stripe size (--size argument) should match your application’s I/O patterns.
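
To see which striping a directory or file currently uses, lfs getstripe can be used; a short sketch (the directory name is a placeholder):

[bash]
# show the current striping of a directory on $FASTTMP
lfs getstripe $FASTTMP/some-directory

# stripe newly created files over all OSTs with a 128 MB stripe size, as described above
lfs setstripe --count -1 --size 128m $FASTTMP/some-directory
[/bash]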

PBS commands do not work – but they used to work / Jobs are not started (2010-11-03)

The PBS server currently has a severe bug: if a job requests too much memory and thus crashes the master node of the job, the PBS server stalls for quite a long time (several hours) and does not respond to any requests at all (although it is running on a different server). This may lead to hanging user commands or error messages like pbs_iff: cannot read reply from pbs_server. No permission, cannot connect to server ladm1, or Unauthorized Request. And of course, while the PBS server process hangs, no new jobs are started.

If you are interested in the technical details: look at the bugzilla entry at
http://www.clusterresources.com/bugzilla/show_bug.cgi?id=85

mpiexec.hydra and LD_LIBRARY_PATH (2010-11-08)

It currently looks like mpiexec.hydra from Intel MPI 4.0.0.x does not respect LD_LIBRARY_PATH settings. Further investigations are being carried out. Right now it looks like an issue with the NEC login scripts on the compute nodes, which overwrite the LD_LIBRARY_PATH setting.

Suggested workaround for now: mpiexec.hydra ... env LD_LIBRARY_PATH=$LD_LIBRARY_PATH ./a.out.

Other MPI start mechanisms might be affected, too.

2010-11-25: it is not clear whether the workaround is still required, as mpi-selector/Oscar-modules/env-switcher have been uninstalled in the meantime.

MPI-IO and Intel MPI 4.0.1.007 (2010-12-09)

Probably not only a LiMa issue, but also relevant on LiMa. MPI-IO to $FASTTMP with Intel MPI 4.0.1.007 (4.0up1) fails with “File locking failed” unless the following environment variables are set: I_MPI_EXTRA_FILESYSTEM=on and I_MPI_EXTRA_FILESYSTEM_LIST=lustre. Intel MPI up to 4.0.0.028 worked fine both with and without these variables set.
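
If you use an Intel MPI module that does not already set them, a sketch for a bash job script:

[bash]
# work around "File locking failed" for MPI-IO on $FASTTMP (Lustre) with Intel MPI 4.0.1.007
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre
[/bash]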

I_MPI_EXTRA_FILESYSTEM=on and I_MPI_EXTRA_FILESYSTEM_LIST=lustre are set by the intelmpi/4.0.1.007-* modules (since 2011-02-02).

STAR-CCM+ (2010-11-12)

Here is a (partially tested) sample job script:

[bash]
#!/bin/bash -l
#PBS -l nodes=4:ppn=24
#PBS -l walltime=00:25:00
#PBS -N simXYZ
#PBS -j eo

# star-ccm+ arguments
CCMARGS="-load simxyz.sim"

# specify the time you want to have to save results, etc.
TIME4SAVE=1200

# number of cores to use per node
PPN=12

# some MPI options; explicit pinning – must match the PPN line
MPIRUN_OPTIONS="-cpu_bind=v,map_cpu:0,1,2,3,4,5,6,7,8,9,10,11"

### normally, no changes should be required below ###

module add star-ccm+/5.06.007

echo

# count the number of nodes
NODES=`uniq ${PBS_NODEFILE} | wc -l`
# calculate the number of cores actually used
CORES=$(( ${NODES} * ${PPN} ))

# check whether enough licenses are available
/apps/rrze/bin/check_lic.sh -c ${CDLMD_LICENSE_FILE} hpcdomains $(($CORES -1)) ccmpsuite 1
. /apps/rrze/bin/check_autorequeue.sh

# change to working directory
cd ${PBS_O_WORKDIR}

# generate new node file
for node in `uniq ${PBS_NODEFILE}`; do
echo "${node}:${PPN}"
done > pbs_nodefile.${PBS_JOBID}

# some exit/error traps for cleanup
trap 'echo; echo "*** Signal TERM received: `date`"; echo; rm pbs_nodefile.${PBS_JOBID}; exit' TERM
trap 'echo; echo "*** Signal KILL received: `date`"; echo; rm pbs_nodefile.${PBS_JOBID}; exit' KILL

# automatically detect how much time this batch job requested and adjust the
# sleep accordingly
export TIME4SAVE
( sleep ` qstat -f ${PBS_JOBID} | awk -v t=${TIME4SAVE} \
'{if ( $0 ~ /Resource_List.walltime/ ) \
{ split($3,duration,":"); \
print duration[1]*3600+duration[2]*60+duration[3]-t }}' ` && \
touch ABORT ) >& /dev/null &
SLEEP_ID=$!

# start STAR-CCM+
starccm+ -batch -rsh ssh -mppflags “$MPIRUN_OPTIONS” -np ${CORES} -machinefile pbs_nodefile.${PBS_JOBID} ${CCMARGS}

# final clean up
rm pbs_nodefile.${PBS_JOBID}
pkill -P ${SLEEP_ID}
[/bash]
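
Assuming the script is saved as e.g. starccm.pbs (the name is arbitrary), it is submitted as usual:

[bash]
qsub starccm.pbs
[/bash]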

Solved issues

  • cmake – (2010-10-26) – rebuilt from sources; should work now without dirty LD_LIBRARY_PATH settings
  • svn – (2010-10-26) – installed CentOS rpm on the login nodes; the version unfortunately is a little bit old (1.4.2); however, there is no real chance to get a newer one on LiMa. Go to cshpc to get at least 1.5.0
  • qsub from compute nodes – (2010-10-26) – should work now; added allow_node_submit=true to PBS server
  • vim – (2010-10-27) – installed CentOS rpm of vim-enhanced on the login nodes
  • autologout – (2010-10-27) – will be disabled for CSH once /apps/rrze/etc/use-rrze-modules.csh is sourced
  • xmgrace, gnuplot, (xauth) – (2010-11-02) – installed on the login nodes; version from CentOS rpm
  • clock skew on /lxfs ($FASTTMP) is now hopefully really fixed (2010-11-04)
  • rsh and rsh-server have been uninstalled from compute/login/admin nodes (2010-11-05)
  • several usability improvements added to use-rrze-modules.csh/sh (2010-11-19)