Johannes Habichs Blog

21. March 2012

HIWI Positions available at RRZE’s HPC group

Johannes Habich, 08:00 Uhr in Allgemeines, HPC

Are you a student at FAU or OHN and interested in HPC? There are open HiWi positions in our group!

Current topics:

Just write an email to hpc "at" rrze.uni-erlangen.de and ask for our current research topics.

Status: Open; last update: 21.03.2012

22. December 2011

GETOPT for Windows

Johannes Habich, 11:47 Uhr in HPC, Tools, Windows HPC

The portability of programs between *nix and Windows is often limited not by the program itself but by certain helper tools. One of those is getopt, which provides a fast and easy way to parse command-line arguments.

Please find a Windows version here: http://suacommunity.com/dictionary/getopt-entry.php

You will find other useful ports there as well.
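
For illustration, here is a minimal sketch of typical getopt usage; the option letters, defaults and variable names are invented for the example, and the Windows port linked above is assumed to provide the usual POSIX-style getopt interface:

  #include <stdio.h>
  #include <stdlib.h>
  #include <getopt.h>   /* on Windows: the header shipped with the port linked above */

  int main(int argc, char *argv[])
  {
      int opt;
      int iterations = 100;           /* hypothetical default */
      const char *output = "out.dat"; /* hypothetical default */

      /* "n:o:h" means: -n and -o take an argument, -h does not */
      while ((opt = getopt(argc, argv, "n:o:h")) != -1) {
          switch (opt) {
          case 'n': iterations = atoi(optarg); break;
          case 'o': output = optarg;           break;
          case 'h':
          default:
              fprintf(stderr, "usage: %s [-n iterations] [-o outfile]\n", argv[0]);
              return 1;
          }
      }
      printf("iterations=%d output=%s\n", iterations, output);
      return 0;
  }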

8. November 2011

WINDOWS: Build Boost for Visual Studio 2010

Johannes Habich, 12:04 Uhr in HPC, Windows HPC

Source:

http://stackoverflow.com/questions/2629421/how-to-use-boost-in-visual-studio-2010

If you are using header-only libraries, then all you need to do is unarchive the boost download and set up the environment variables. The instructions below set the environment variables for Visual Studio only, and not across the system as a whole. Note that you only have to do this once. A minimal usage example follows the list below.

  1. Unarchive the latest version of boost (1.47.0 as of writing) into a directory of your choice (e.g. C:\boost_1_47_0).
  2. Create a new empty project in Visual Studio.
  3. Open the Property Manager and expand one of the configurations for the platform of your choice.
  4. Right-click Microsoft.Cpp.<Platform>.user and select Properties to open the Property Page for editing.
  5. Select VC++ Directories on the left.
  6. Edit the Include Directories section to include the path to your boost source files.
  7. Repeat steps 3 – 6 for the other platforms if needed.
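
Once the include path is set, header-only libraries can be used directly, with nothing to link. A minimal sketch, using boost::lexical_cast as an arbitrary header-only example:

  #include <iostream>
  #include <string>
  #include <boost/lexical_cast.hpp>   // header-only: no library to link

  int main()
  {
      // convert a string to a number and back, just to exercise the headers
      int n = boost::lexical_cast<int>("42");
      std::string s = boost::lexical_cast<std::string>(n * 2);
      std::cout << s << std::endl;    // prints 84
      return 0;
  }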

If you want to use the parts of Boost that require building, but none of the features that require external dependencies, then building it is fairly simple.

  1. Unarchive the latest version of boost (1.47.0 as of writing) into a directory of your choice (e.g. C:\boost_1_47_0).
  2. Start the Visual Studio Command Prompt for the platform of your choice and navigate to where boost is.
  3. Run: bootstrap.bat to build b2.exe (previously named bjam).
  4. Run b2: (Win32) b2 --toolset=msvc-10.0 --build-type=complete stage ; (x64) b2 --toolset=msvc-10.0 --build-type=complete architecture=x86 address-model=64 stage. Go for a walk / watch a movie or 2 / ….
  5. Go through steps 2 – 6 from the set of instructions above to set the environment variables.
  6. Edit the Library Directories section to include the path to your Boost libraries output. (The default for the example and instructions above would be C:\boost_1_47_0\stage\lib.) Rename and move the directory first if you want to have x86 & x64 side by side (such as to <BOOST_PATH>\lib\x86 & <BOOST_PATH>\lib\x64).
  7. Repeat steps 2 – 6 for the other platforms if needed.

If you want both x64 & win32 side by side, add "--stagedir=lib/win32" and "--stagedir=lib/x64" to the respective builds.
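
Combined with the b2 invocations from step 4 above, that would look roughly as follows (same msvc-10.0 toolset assumed):

  b2 --toolset=msvc-10.0 --build-type=complete --stagedir=lib/win32 stage
  b2 --toolset=msvc-10.0 --build-type=complete architecture=x86 address-model=64 --stagedir=lib/x64 stage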

4. August 2011

TinyGPU upgrade to CUDA Toolkit 4.0

Johannes Habich, 07:07 Uhr in CUDA, HPC

All nodes of the TinyGPU cluster are now on the current CUDA Toolkit 4.0 and the appropriate driver.

20. June 2011

7th Erlangen International High-End-Computing Symposium

Johannes Habich, 15:23 Uhr in Allgemeines, HPC

Please note the upcoming 7th EIHECS this week, on Friday, the 24th of June.

15. October 2010

Lectures and seminars of the HPC group

Johannes Habich, 07:55 Uhr in Allgemeines

Interested in topics in and around HPC for your studies?

Then have a look at the official homepage.

Find all lectures and seminars here.

13. October 2010

NVIDIA CUDA TCC Driver Released 260.83

Johannes Habich, 08:03 Uhr in CUDA, HPC, Windows HPC

Just today Nvidia released the WHQL-certified Tesla Compute Cluster driver TCC 260.83 for use in e.g. Windows 2008 Server/HPC.
Until now, only a beta version was available.
With that special driver you have the ability to use GPGPU compute resources via RDP or via the Windows HPC batch processing mode.

Download the driver here

/edit:
Actually, installing this driver broke my working environment, so be sure to keep a backup of the system. Even reinstalling the beta version did not solve the problem.

12. October 2010

Win2008 HPC Server and CUDA TCC

Johannes Habich, 13:54 Uhr in CUDA, HPC, Windows HPC

Nvidia now provides a beta driver called Tesla Compute Cluster (TCC) in order to use CUDA GPUs within a Windows cluster environment, not only remotely via RDP but also in batch processing. Until now, the HPC Server lacked this ability, as Windows did not start the graphics driver inside the limited batch logon mode.

My first steps with TCC took a little bit longer than estimated.

First of all, it is not possible to have an NVIDIA and an AMD or Intel GPU side by side, as Windows needs to use one unified WDDM driver, and that is either one vendor or the other. This was completely new to me.

After this first minor setback, and re-equipped with only the Tesla C2050, the BIOS did not finish booting, so be sure to be up to date with your BIOS revision.
Another NVIDIA card was the quick fix on my side.

The next thing is the setup. Install the 260 (currently beta) drivers, and the exclamation mark in the Device Manager should vanish.
After that, install the toolkit and the SDK if you like.
With the nvidia-smi tool, which you will find in one of the countless NVIDIA folders that are there now, you can check whether the card is initially recognized correctly.
Also enable the TCC mode of the Tesla card if you want to have remote CUDA capabilities:

nvidia-smi -s --> shows the current status
nvidia-smi -g 0 -c 1 --> enables TCC on GPU 0

Next, you want to test the deviceQuery example coming with the SDK.
If it runs and everything looks fine, consider yourself lucky!

Nothing ran on my setup, so first of all I tried to build the SDK example myself. To do so, first build the CUDA utilities, located somewhere in the SDK within the folder "common".
Depending on the Nsight or toolkit version you have installed, you may get an error when opening the VS project files. Then you need to edit the Visual Studio project file with a text editor of your choice and replace the outdated build rule with the one actually installed.

  • From the error message, note the folder where VS does not find the file.
  • Copy the path and go there with your file browser.
  • Find the file that most closely matches the one named in the VS error message.
  • Once found, open the VS project file in a text editor and replace the wrong filename there with the correct one.
  • VS should now open the project.

In order to compile, add the correct include and library directories to the VS project.
Finally, you can build deviceQuery or any other program.

Still, this setup gave me the same error as the precompiled deviceQuery:
cudaGetDeviceCount FAILED CUDA Driver and Runtime version may be mismatched.
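
To narrow down such a mismatch, a small program that queries the driver and runtime versions can help. A minimal sketch using the CUDA runtime API (compile with nvcc, or link against cudart):

  #include <cstdio>
  #include <cuda_runtime.h>

  int main()
  {
      int driverVersion = 0, runtimeVersion = 0, deviceCount = 0;

      // version reported by the installed driver and by the runtime library in use
      cudaDriverGetVersion(&driverVersion);
      cudaRuntimeGetVersion(&runtimeVersion);
      printf("driver version: %d, runtime version: %d\n", driverVersion, runtimeVersion);

      cudaError_t err = cudaGetDeviceCount(&deviceCount);
      if (err != cudaSuccess) {
          printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
          return 1;
      }
      printf("found %d CUDA device(s)\n", deviceCount);
      return 0;
  }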

With the help of Dependency Walker I found out that a missing DLL was the problem, namely linkinfo.dll.

You can get this DLL by adding the feature named "Desktop Experience" through the Server Manager.
Once it was installed and the machine rebooted, deviceQuery worked.

16. September 2010

SKALB Conferences

Johannes Habich, 06:49 Uhr in SKALB

Conferences at which I have presented SKALB-related results.

19. June 2009

MPI/OpenMP Hybrid: pinning or no pinning, that is the question

Johannes Habich, 14:34 Uhr in HPC, Parallel Computing MPI/OpenMP

Intro

Recently performed benchmarks of a hybrid-parallelized flow solver showed what one has to consider in order to get the best performance.
On the theoretical side, hybrid implementations are thought to be the most flexible while still maintaining high performance, because OpenMP is assumed to be perfect for intranode communication and faster than MPI there.
Between nodes, MPI is in any case the choice for portable distributed-memory parallelization.

In reality, however, many MPI implementations already use shared-memory buffers when communicating with other ranks in the same shared-memory system. So basically there is no advantage of OpenMP over MPI for the intranode level of parallelization when MPI is used for internode communication anyway.

Quite the contrary: apart from the additional implementation effort, it requires a lot more understanding of the processor and memory hierarchy layout, and of thread and process affinity to cores, than a pure MPI implementation.

Nevertheless, there are scenarios where hybrid really can pay off, as MPI lacks, for example, the OpenMP feature of accessing shared data in shared caches.
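
As a starting point for experimenting with the pinning options described below, a minimal hybrid skeleton that reports where each thread ends up can be handy. This is only a sketch; the sched_getcpu() call is Linux-specific:

  #include <cstdio>
  #include <mpi.h>
  #include <omp.h>
  #include <sched.h>   // sched_getcpu(), Linux-specific

  int main(int argc, char *argv[])
  {
      int provided, rank;
      // request threaded MPI; MPI_THREAD_FUNNELED is enough if only the master thread calls MPI
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      #pragma omp parallel
      {
          // each OpenMP thread reports the core it is currently running on
          printf("rank %d, thread %d of %d, running on core %d\n",
                 rank, omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
      }

      MPI_Finalize();
      return 0;
  }

Compile it with e.g. mpicxx -fopenmp and run it with the mpirun options below to check that the threads stay on the intended cores.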

Finally, if you want to run your hybrid code on RRZE systems, the following features are available.

Pinning of MPI/OpenMP hybrids

I assume you use the provided mpirun wrapper:

  • mpirun -pernode issues just one MPI process per node regardless of the nodefile content.
  • mpirun -npernode n issues just n MPI processes per node regardless of the nodefile content.
  • mpirun -npernode 2 -pin 0,1_2,3 env LD_PRELOAD=/apps/rrze/lib/ptoverride-ubuntu64.so issues 2 MPI processes per node and gives the threads of MPI process 0 access only to cores 0 and 1, and the threads of MPI process 1 access only to cores 2 and 3 (of course the MPI processes themselves are also limited to those cores). Furthermore, the (e.g. OpenMP) threads are pinned to one core each, so that migration is no longer an issue.
  • mpirun -npernode 2 -pin 0_1_2_3 is your choice if you would like to test 1 OpenMP thread per MPI process and 4 MPI processes in total per node. However, adding the LD_PRELOAD from above decreases performance a lot; this is currently under investigation.
  • export PINOMP_MASK=2 changes the skip mask of the pinning tool.

OpenMP spawns not only worker threads but also threads for administrative business such as synchronization. Usually you would only pin the threads contributing to the computation. The default skip mask, which skips the non-computational threads, might not be correct in the case of hybrid programming, as MPI also spawns non-worker threads. The PINOMP_MASK variable is interpreted as a bitmask, e.g. 2 --> 10 and 6 --> 110 in binary. A zero means to pin the thread and a one means to skip pinning that thread. The least significant bit corresponds to thread zero (bit 0 is 0 in the examples above).

A mask of 6 was used for the algorithm under investigation as soon as one MPI process and 4 OpenMP worker threads were used per node, in order to get the correct thread pinning.

The usage of the rankfile for pinning hybrid jobs is described in Thomas Zeiser's blog.

Thanks to Thomas Zeiser and Michael Meier for their help in resolving this issue.

Keywords: Thread Pinning, Hybrid, Affinity

Incorporated Comments of Thomas Zeiser

Thomas Zeiser, Thursday, 23. July 2009, 18:15

PINOMP_MASK for hybrid codes using Open-MPI

If recent Intel compilers and Open-MPI are used for hybrid OpenMP-MPI programming, the correct PINOMP_MASK seems to be 7 (instead of 6 for hybrid codes using Intel-MPI).

Thomas Zeiser, Monday, 22. February 2010, 20:41
PIN OMP and recent mvapich2

Also recent mvapich2 requires special handling for pinning hybrid codes: PINOMP_SKIP=1,2 seems to be appropriate.
