Information about the HPC-Cluster
In order to get access to the HPC resources of the physics department, you need to send an email to hpc@physik.fu-berlin.de. Please supply the following information:
- The group you are using the system for (e.g. ag-netz, ag-imhof, …)
- The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, …) and whether you use MPI or OpenCL/CUDA.
- Software that you know well enough that other HPC users within the department may ask you for help with it.
- A self-contained job example that is typical for the workload you are using the HPC systems for, ideally with a small README describing how to run it and a job script (see the sketch after this list). If possible, scale it so that it runs for between a few minutes and one hour at most.
- If you are no longer a member of the physics department, we would like an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
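A minimal sketch of what such a job example could look like; the job name, resource values, and the final command are placeholders to adapt to your actual workload:

```bash
#!/bin/bash
# example.job -- minimal self-contained Slurm job script (placeholder values)
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --output=example-%j.out   # stdout/stderr file, %j expands to the job id
#SBATCH --ntasks=1                # a single task
#SBATCH --cpus-per-task=1         # one core for that task
#SBATCH --mem-per-cpu=1G          # memory per core
#SBATCH --time=00:15:00           # wall-clock limit of 15 minutes

# Replace this line with the actual workload you want to demonstrate.
srun hostname
```

Submit it with `sbatch example.job`.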
2015-06-16: Currently there are two HPC clusters in production at the physics department. One is located in the HLRN data center, the other at ZEDAT. See the table below for further information on available resources. The two clusters share only a common /home; everything else, such as the queuing system or /scratch, is separate. Both run Debian/Wheezy and use the Slurm scheduling system.
Slurm documentation
- Important notes on cluster usage
- Start with the Introduction to the Slurm HPC cluster.
- Using interactive sessions with the queuing system.
- How to make use of the GPU-nodes.
- Here is a list of special nodes that are currently not part of Slurm.
- Here is a list of HPC users and the software they use.
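For quick reference, a few everyday Slurm commands; the pages linked above are authoritative, and the exact GPU --gres string is an assumption about the local configuration:

```bash
# Submit a batch job; Slurm prints the job id on success.
sbatch example.job

# List your own jobs, or cancel one by its id (12345 is a placeholder).
squeue -u $USER
scancel 12345

# Start an interactive shell on a compute node
# (see the interactive-sessions page above for details).
srun --pty bash

# Request a GPU for an interactive session; the gres name depends on
# the cluster configuration (see the GPU-nodes page above).
srun --gres=gpu:1 --pty bash
```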
General documentation
- Robert Hübener from AG-Eisert has written a HOWTO for using Mathematica on an HPC cluster.
- A recent Python version has been built for cluster usage. The Python on the HPC-Cluster tutorial describes how to set it up.
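Since both clusters share /home, a Python environment created there once is available on both. A minimal sketch, assuming the cluster Python from the tutorial above is on your PATH and ships the virtualenv tool; directory and package names are placeholders:

```bash
# Create an isolated environment in your shared home directory.
virtualenv $HOME/my-venv

# Activate it and install packages locally (numpy is just an example).
source $HOME/my-venv/bin/activate
pip install numpy
```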
Overview of available resources
The following table lists the HPC resources available at the physics department. At the end of the table we also list the resources of the ZEDAT soroban cluster. The tron cluster at Takustrasse 9 is currently being restructured. We also have some special-purpose nodes that are currently not managed by Slurm.
| Hosts | Manager | Nodes | Form | Hardware | CPU | Speed | Cores/Node | RAM/Core | RAM/Node | Total RAM | Total Cores |
|---|---|---|---|---|---|---|---|---|---|---|---|
| **tron cluster - FB Physik - Location: Takustrasse 9** | | | | | | | | | | | |
| z001-z079 | SLURM | 79 | 1U | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20 | 8G | 128G | 10112G | 1580 |
| #Taku9 | | 79 | | | | | | | | 10112G | 1580 |
| **sheldon/leonard cluster - FB Physik - Location: HLRN - OS: Debian/Jessie** | | | | | | | | | | | |
| x001-x192 | SLURM1) | 192 | Blade | SGI Altix ICE 8200 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 9216G | 1536 |
| #HLRN | | 192 | | | | | | | | 9216G | 1536 |
| **yoshi cluster - FB Physik - Location: ZEDAT - OS: Debian/Wheezy** | | | | | | | | | | | |
| y001-y160 | SLURM1) | 160 | Blade | HP BL460c G6 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 7680G | 1280 |
| ygpu01-ygpu31 | SLURM2) | 31 | 2U GPU nodes (2x Nvidia Tesla M2070) | IBM iDataPlex dx360 M3 | 2x Xeon X5570 | 2.93GHz | 8 | 3G | 24G | 744G | 248 |
| #Ph-ZEDAT | | 191 | | | | | | | | 8424G | 1528 |
| **soroban cluster - ZEDAT-HPC - Location: ZEDAT** | | | | | | | | | | | |
| node001-002 | SLURM | 2 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 96G | 24 |
| node003-030 | SLURM | 28 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 4G | 24G | 672G | 336 |
| node031-100 | SLURM | 70 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 3360G | 840 |
| node101-112 | SLURM | 12 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 16G | 96G | 1152G | 144 |
| #ZEDAT | | 112 | | | | | | | | 5280G | 1344 |
| Abacus4 | | 8 | | IBM p575 | 16x POWER5+ | 1.9GHz | 32 | 4G | 128G | 1024G | 256 |
Operating System: Debian Linux Squeeze (x64)
1) in production but still experimental
2) work in progress
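When requesting memory, the RAM/Core column above is the value to match: asking for more memory per core than a node class provides leaves cores unusable. A hedged sketch for an MPI job on the x/y nodes (6G per core as in the table; ./my_mpi_program is a placeholder):

```bash
#!/bin/bash
#SBATCH --ntasks=16          # 16 MPI ranks, i.e. two of the 8-core x/y nodes
#SBATCH --mem-per-cpu=6G     # matches the 6G RAM/Core of the x/y nodes
#SBATCH --time=02:00:00      # wall-clock limit of two hours

srun ./my_mpi_program        # placeholder for your MPI binary
```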
(15.03.2014)