Information about the HPC Cluster

In order to get access to the HPC resources of the Department of Physics, you need to send an email to hpc@physik.fu-berlin.de. Please supply the following information:

  1. Your ZEDAT account username
  2. The group you are using the system for (e.g. ag-netz, ag-imhof, …)
  3. The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, …) and whether you use MPI or OpenCL/CUDA.
  4. Software that you know well enough that other HPC users within the department may ask you for help with it.
  5. A self-contained job example that is typical for the workload you will run on the HPC systems, ideally with a small README describing how to run it and a job script (see the sketch below this list). If possible, scale it so that it runs for between a few minutes and an hour at most.
  6. If you are no longer a member of the physics department, an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
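
To give a rough idea of what such a job example might look like, below is a minimal sketch of a Slurm batch script. The partition name, module name, input files and resource numbers are placeholders (assumptions, not actual settings of our clusters); adjust them to your own test case.

  #!/bin/bash
  #SBATCH --job-name=example-job       # name shown in the queue
  #SBATCH --partition=main             # placeholder; use a partition that exists on the cluster
  #SBATCH --nodes=1                    # single node
  #SBATCH --ntasks=4                   # number of MPI ranks for the test case
  #SBATCH --time=00:30:00              # keep the example short (a few minutes up to one hour)
  #SBATCH --mem-per-cpu=2G             # memory per core
  #SBATCH --output=example-%j.out      # output file, %j is replaced by the job id

  # Load the environment your code needs (module name is a placeholder).
  module load gromacs

  # Run the actual test case; srun starts the MPI ranks requested above.
  srun gmx_mpi mdrun -deffnm example

Such a script, together with its input files and a short README, would be submitted with sbatch jobscript.sh.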

2018-06-28: Currently there are three HPC clusters in production at the physics department. One cluster ("leonard") is located in the HLRN datacenter, another ("yoshi") at ZEDAT, and the third ("tron") in the IMP datacenter at Takustraße 9. See the table below for further information on available resources. The yoshi cluster is currently under maintenance for a system upgrade.

Slurm documentation

General documentation
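
Until you have read the full documentation, the standard Slurm command-line tools cover most day-to-day use. The script name and job id below are placeholders:

  sinfo                    # list partitions and the state of their nodes
  sbatch jobscript.sh      # submit a batch job script
  squeue -u $USER          # show your own pending and running jobs
  scancel 12345            # cancel a job by its job id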

Overview of available resources

The following table lists some of the HPC resources available at the physics department. At the end of the table we also list the resources of the ZEDAT soroban cluster. The tron cluster at Takustrasse 9 is currently being restructured. We also have some special-purpose nodes that are currently not managed by Slurm.

| Hosts | Manager | Nodes | Form | Hardware | CPU | Speed | Cores/Node | RAM/Core | RAM/Node | Total RAM | Total Cores |
tron cluster - FB Physik - Location: Takustrasse 9 - OS: Debian/Stretch
| z001-z079 | SLURM | 79 | 1U | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20 | 8G | 128G | 10112G | 1580 |
| z081-z113 | SLURM | 33 | 2U, GPU nodes (2x Nvidia Tesla K20x) | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20 | 8G | 128G | 4224G | 660 |
| z163-z166 | SLURM | 4 | 2U | HP DL560 G8 | 4x Xeon E5-4650L | 2.6GHz | 32 | 24G | 768G | 3072G | 128 |
| #Taku9 | | 116 | | | | | | | | 17408G | 2368 |
sheldon/leonard cluster - FB Physik - Location: HLRN - OS: Debian/Jessie
| x001-x192 | SLURM 1) | 192 | Blade | SGI Altix ICE 8200 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 9216G | 1536 |
| #HLRN | | 192 | | | | | | | | 9216G | 1536 |
yoshi cluster - FB Physik - Location: ZEDAT - OS: Debian/Wheezy
| y001-y160 | SLURM 1) | 160 | Blade | HP BL460c G6 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 7680G | 1280 |
| ygpu01-ygpu31 | SLURM 2) | 31 | 2U, GPU nodes (2x Nvidia Tesla M2070) | IBM iDataPlex dx360 M3 | 2x Xeon X5570 | 2.93GHz | 8 | 3G | 24G | 744G | 248 |
| #Ph-ZEDAT | | 159 | | | | | | | | 8424G | 1528 |
soroban cluster - ZEDAT-HPC - Location: ZEDAT
| node001-002 | SLURM | 2 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 96G | 24 |
| node003-030 | SLURM | 28 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 4G | 24G | 672G | 336 |
| node031-100 | SLURM | 70 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 3360G | 840 |
| node101-112 | SLURM | 12 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 16G | 96G | 1152G | 144 |
| #ZEDAT | | 112 | | | | | | | | 5280G | 1344 |
| Abacus4 | | 8 | | IBM p575 | 16x POWER5+ | 1.9GHz | 32 | 4G | 128G | 1024G | 256 |

Operating System: Debian Linux Squeeze (x64)
1) in production but still experimental
2) work in progress
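
Some of the node classes in the table are GPU nodes (Nvidia Tesla K20x and M2070). On a typical Slurm installation, GPUs are requested as a generic resource (GRES); whether and how this is configured on our clusters is an assumption here, so treat the following batch-script lines as a sketch only:

  #SBATCH --partition=gpu      # placeholder; the real partition name may differ
  #SBATCH --gres=gpu:1         # request one GPU per node (assumes a GRES named "gpu" is defined)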

(15.03.2014)
