Information about the HPC-Cluster
To get access to the Department of Physics HPC resources, send an email to hpc@physik.fu-berlin.de. If possible, please give some information on the kind of jobs you are planning to run and the software you plan to use.
2014-06-11: There are currently two different HPC clusters in production at the physics department. The old cluster still runs Debian Squeeze and uses Torque/MAUI as its queueing system, while the new cluster runs Debian Wheezy and uses Slurm. Although the documentation for the new cluster is far from complete, new accounts will only be created on the new system. If you absolutely need access to the old system for some reason, drop us a note at hpc@physik.fu-berlin.de.
Slurm documentation (new)
- Important notes on cluster usage
- Start with the Introduction to the Slurm HPC cluster; a minimal job-script sketch follows below this list.
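For orientation, here is a minimal sketch of a Slurm batch job, written as a Python script (sbatch accepts non-shell scripts as long as the shebang line is present and reads the #SBATCH comment lines at the top). The job name, memory, and time values are placeholders, not settings taken from this page, and the partition name is intentionally omitted; see the introduction linked above for the values that apply on this cluster.

```python
#!/usr/bin/env python
#SBATCH --job-name=example          # job name shown in squeue
#SBATCH --ntasks=1                  # run a single task
#SBATCH --cpus-per-task=1           # one core for that task
#SBATCH --mem-per-cpu=2048          # placeholder memory request (MB)
#SBATCH --time=01:00:00             # placeholder wall-clock limit (1 hour)
# The partition name is cluster specific and intentionally left out here;
# see the introduction page for the correct --partition value.

"""Minimal Slurm job sketch: report where the job runs, then do some work."""
import os
import socket

print("Running on host:", socket.gethostname())
print("Slurm job id:", os.environ.get("SLURM_JOB_ID", "not set"))

# Replace this placeholder loop with the actual computation.
total = sum(i * i for i in range(1000))
print("Result:", total)
```

Such a script would be submitted with `sbatch jobscript.py`, and `squeue -u $USER` shows its state in the queue.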
Torque documentation (old)
- Use local storage on the nodes if at all possible (see the sketch after this list).
- Start with the Introduction to the Torque HPC cluster.
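To illustrate the local-storage note above: a job should copy its input to a node-local directory, compute there, and copy the results back at the end, instead of reading and writing the shared home directory over the network for the whole run. The Python sketch below assumes the node provides a local scratch path in the TMPDIR environment variable (with /tmp as a fallback) and uses placeholder input/output paths; both are assumptions for illustration, not documented settings of this cluster.

```python
#!/usr/bin/env python
"""Sketch: stage data to node-local storage, compute there, copy results back."""
import os
import shutil
import tempfile

def main():
    # Node-local scratch directory; TMPDIR and /tmp are assumptions.
    scratch_root = os.environ.get("TMPDIR", "/tmp")
    workdir = tempfile.mkdtemp(prefix="myjob-", dir=scratch_root)

    # Hypothetical input/output locations in the (slow, shared) home directory.
    input_file = os.path.expanduser("~/data/input.dat")
    result_file = os.path.join(workdir, "result.dat")

    shutil.copy(input_file, workdir)            # stage input data in
    with open(result_file, "w") as out:         # placeholder for the real computation
        out.write("computed on local scratch\n")

    shutil.copy(result_file, os.path.expanduser("~/data/"))  # stage results out
    shutil.rmtree(workdir)                      # clean up the local scratch space

if __name__ == "__main__":
    main()
```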
General documentation
- Robert Hübener from AG-Eisert has written a HOWTO for using Mathematica on an HPC cluster.
- A current Python version has been built for cluster usage. The Python on the HPC-Cluster tutorial describes how to set it up; a quick interpreter check is sketched below.
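Independent of that tutorial, a quick way to confirm which interpreter and version a job actually picked up is to print it from within the job. The snippet below uses only the standard library and makes no assumptions about where the cluster-provided Python is installed.

```python
#!/usr/bin/env python
"""Report which Python interpreter a job is actually running under."""
import sys

print("Interpreter:", sys.executable)
print("Version:", sys.version.split()[0])
```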
Overview of available resources
The following table lists all HPC resources available at the physics department. At the end of the table we also list the resources of the ZEDAT soroban cluster. The Torque system will soon be replaced by Slurm. All Torque nodes run Debian Squeeze, while all Slurm nodes run Debian Wheezy.
Hosts | Manager | Nodes | Form | Hardware | CPU | Speed | Cores/Node | RAM/Core | RAM/Node | Total RAM | Total Cores |
---|---|---|---|---|---|---|---|---|---|---|---|
FB Physik - Takustrasse 9 site | | | | | | | | | | | |
n010-n041 | offline | 32 | 2U Twin2 | Dell C6100 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 96G | 3072G | 384 |
n110-n111 | offline | 2 | 2U | Dell C6145 | 4x Opteron 6128HE | 2.0GHz | 32 | 4G | 128G | 256G | 64 |
n112-n127 | offline | 16 | Blade | Dell M600 | 2x Xeon E5450 | 3.00GHz | 8 | 2G | 16G | 256G | 128 |
n128-n143 | offline | 16 | Blade | Dell M600 | 2x Xeon E5450 | 3.00GHz | 8 | 2G | 16G | 256G | 128 |
n144-n175 | offline | 32 | Blade | Dell M610 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 1536G | 256 |
n176-n183 | offline | 8 | 4U | HP DL580 | 4x Xeon X7560 | 2.26GHz | 32 | 8G | 256G | 2048G | 256 |
#Taku9 | | 106 | | | | | | | | 7424G | 1216 |
FB Physik - HLRN site | | | | | | | | | | | |
x001-x192 | SLURM 1) | 192 | Blade | SGI Altix ICE 8200 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 9216G | 1536 |
uv1000 | none | 1 | 42U | SGI UV 1000 | 64x Xeon X7560 | 2.26GHz | 512 | 4G | 2048G | 2048G | 512 |
#HLRN | | 193 | | | | | | | | 11264G | 2048 |
FB Physik - ZEDAT site | | | | | | | | | | | |
y001-y128 | SLURM 1) | 128 | Blade | HP BL460c G6 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 6144G | 1024 |
ygpu01-ygpu31 | SLURM 2) | 31 | 2U GPU nodes (2x Nvidia Tesla M2070) | IBM iDataPlex dx360 M3 | 2x Xeon X5570 | 2.93GHz | 8 | 3G | 24G | 744G | 248 |
#Ph-ZEDAT | | 159 | | | | | | | | 6888G | 1272 |
ZEDAT-HPC (soroban) | | | | | | | | | | | |
node001-002 | SLURM | 2 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 96G | 24 |
node003-030 | SLURM | 28 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 4G | 24G | 672G | 336 |
node031-100 | SLURM | 70 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 3360G | 840 |
node101-112 | SLURM | 12 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 16G | 96G | 1152G | 144 |
#ZEDAT | | 112 | | | | | | | | 5280G | 1344 |
Decommissioned systems | | | | | | | | | | | |
Abacus4 | | 8 | | IBM p575 | 16x POWER 5+ | 1.9GHz | 32 | 4G | 128G | 1024G | 256 |
Operating system: Debian Linux (Squeeze on the Torque nodes, Wheezy on the Slurm nodes), 64-bit x86.
1) in production but still experimental
2) work in progress
(as of 2014-03-15)