Information about the HPC-Cluster

If you have questions, you can find us on Matrix in #hpc:physik.fu-berlin.de

Access to the Cluster

To get access to the physics department's HPC resources, you need to send an email to hpc@physik.fu-berlin.de. Please supply the following information:

  1. Your ZEDAT account username
  2. The group you are using the system for (e.g. ag-netz, ag-imhof, …)
  3. The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, …) and whether you use MPI or OpenCL/CUDA.
  4. Software that you know well enough that other HPC users within the department may ask you for help with it.
  5. A self-contained example job that is typical for the workload you will be using the HPC systems for, ideally with a small README describing how to run it and a job script. If possible, scale it so that it runs for between a few minutes and one hour at most (see the sketch of a job script after this list).
  6. If you are no longer a member of the physics department, please give us an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
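
For item 5, a minimal sketch of what such a job script could look like is shown below. All concrete values (job name, resource requests, time limit) are placeholders, and the srun line stands in for whatever program your example job actually runs.

  #!/bin/bash
  #SBATCH --job-name=example-job        # name shown in the queue
  #SBATCH --ntasks=1                    # number of tasks (MPI ranks)
  #SBATCH --cpus-per-task=4             # CPU cores per task
  #SBATCH --mem-per-cpu=2G              # memory per core
  #SBATCH --time=00:30:00               # wall-clock limit, here 30 minutes
  #SBATCH --output=example-job-%j.out   # output file, %j expands to the job ID

  # Placeholder: replace with the actual call that runs your example workload,
  # e.g. gromacs, gaussian or your self-written code.
  srun ./my_simulation input.dat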

Slurm documentation

Overview of available resources

The current cluster is sheldon; please connect to its login node while we update the section below.

The following table lists some of the HPC resources available at the physics department. The tron cluster at Takustraße 9 is currently being restructured. We also have some special-purpose nodes that are currently not managed by Slurm.

The login node of each of our clusters carries the same name as the cluster itself, e.g. the tron login node is reachable via ssh under the hostname tron.
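
As a quick orientation, a minimal login session might look like the sketch below. The hostname sheldon is taken from this page; everything else (the job script name and the partition layout shown by sinfo) depends on your own setup and the current cluster configuration.

  # Log in to the login node (here: sheldon) with your ZEDAT account;
  # depending on your network you may need the fully qualified hostname.
  ssh <zedat-username>@sheldon

  # Show partitions and node states as seen by Slurm
  sinfo

  # Submit a job script and check your jobs in the queue
  sbatch my_job.sh
  squeue -u $USER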

Hosts                       | Nodes | Cores/Node | RAM/Core | RAM/Node | CPU features | GPU                     | on-GPU RAM | #Cores | #RAM    | #GPU
sheldon-ng cluster - FB Physik - Location: Takustraße 7 - OS: Debian/Bookworm
x[001-016,049-160]          | 128   | 24         | 5.2GB    | 125GB    | x86-64-v2    | -                       | -          | 3072   | 16000GB | 0
x[017-048]                  | 32    | 24         | 20.9GB   | 502GB    | x86-64-v2    | -                       | -          | 768    | 16064GB | 0
x[161-176]                  | 16    | 24         | 5.2GB    | 125GB    | x86-64-v3    | -                       | -          | 384    | 2000GB  | 0
sheldon,x[177-178,180-222]  | 45    | 24         | 42.0GB   | 1007GB   | x86-64-v3    | -                       | -          | 1080   | 45315GB | 0
xq[01-10]                   | 10    | 128        | 2.0GB    | 250GB    | x86-64-v3    | 2x A5000                | 24GB       | 1280   | 2500GB  | 2
xgpu[01-05,07-13]           | 12    | 16         | 11.7GB   | 187GB    | x86-64-v4    | 4x nVidia RTX 2080 TI   | 11GB       | 192    | 2244GB  | 4
xgpu06                      | 1     | 16         | 11.2GB   | 179GB    | x86-64-v4    | 4x nVidia RTX 2080 TI   | 11GB       | 16     | 179GB   | 4
xgpu[14-23]                 | 10    | 16         | 11.7GB   | 187GB    | x86-64-v4    | 4x A5000                | 24GB       | 160    | 1870GB  | 4
xgpu[24-25]                 | 2     | 16         | 11.7GB   | 187GB    | x86-64-v3    | 4x nVidia RTX 3090      | 24GB       | 32     | 374GB   | 4
xgpu26                      | 1     | 64         | 2.0GB    | 125GB    | x86-64-v3    | 10x A5000               | 24GB       | 64     | 125GB   | 10
xgpu28                      | 1     | 24         | 10.4GB   | 250GB    | x86-64-v3    | 4x nVidia RTX A600 Ada  | 48GB       | 24     | 250GB   | 4
xgpu[29-33]                 | 5     | 24         | 5.2GB    | 125GB    | x86-64-v3    | 4x nVidia Titan V       | 12GB       | 120    | 625GB   | 4
xgpu[27,34-52,54-56,58,62]  | 25    | 24         | 5.2GB    | 125GB    | x86-64-v3    | 4x A5000                | 24GB       | 600    | 3125GB  | 4
xgpu57                      | 1     | 24         | 5.2GB    | 125GB    | x86-64-v3    | 4x nVidia RTX A600      | 48GB       | 24     | 125GB   | 4
xgpu[59-61]                 | 3     | 36         | 41.9GB   | 1509GB   | x86-64-v4    | 8x nVidia Tesla P100    | 16GB       | 108    | 4527GB  | 8
xgpu63                      | 1     | 24         | 5.2GB    | 125GB    | x86-64-v3    | 4x nVidia RTX A4500 Ada | 24GB       | 24     | 125GB   | 4
Totals (Takustraße 7)       | 293   |            |          |          |              |                         |            | 7948   | 95448GB | 56

(Data as of 07.02.2025)
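
If you want to check how these resources are exposed by Slurm on the current system, you can query them directly on the login node. The format string below only selects standard sinfo columns; the exact feature tags and GPU (GRES) names are whatever is configured on the cluster.

  # Per-node overview: CPUs, memory, CPU-feature tags and generic resources (GPUs)
  sinfo -N -o "%N %c %m %f %G"

  # Detailed properties of a single node, e.g. one of the GPU nodes listed above
  scontrol show node xgpu01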
