
Information about the HPC-Cluster

If you have questions, you can find us on Matrix in #hpc:physik.fu-berlin.de

Access to the Cluster

To get access to the physics department's HPC resources, send an email to hpc@physik.fu-berlin.de with the following information:

  1. Your ZEDAT account username
  2. The group you are using the system for (e.g. ag-netz, ag-imhof, …)
  3. The software you are using for your simulations (e.g. GROMACS, Gaussian, self-written code in language XYZ, …) and whether you use MPI or OpenCL/CUDA.
  4. Software that you happen to know so well that other HPC users within the department may ask you for help with it.
  5. A self-contained example job that is typical for the workload you will be using the HPC systems for, ideally with a small README describing how to run it and a job script (see the sketch after this list). If possible, scale it so that it runs for between a few minutes and an hour at most.
  6. If you are no longer a member of the physics department, an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
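
For item 5, a minimal sketch of what such a job script could look like, assuming a GROMACS run started with MPI. The module name, resource requests, and input file names are placeholders for illustration, not a prescription for this cluster:

  #!/bin/bash
  #SBATCH --job-name=example-md     # name shown in the queue
  #SBATCH --ntasks=8                # number of MPI ranks
  #SBATCH --mem-per-cpu=2G          # memory per rank
  #SBATCH --time=00:30:00           # keep the example short (a few minutes up to an hour)
  #SBATCH --output=%x-%j.out        # output file named after job name and job ID

  # Load your simulation software; whether a module system is available
  # and what the module is called depends on the cluster.
  module load gromacs

  # Run the actual simulation; replace with your own command and input files.
  srun gmx_mpi mdrun -deffnm example

You would submit such a script with sbatch job.sh and check its status with squeue -u $USER.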

Slurm documentation

General documentation

Overview of available resources

The current cluster is sheldon; please connect to this login node while we update the section below.

The following table lists some of the HPC resources available at the physics department. The tron cluster at Takustrasse 9 is currently being restructured. We also have some special-purpose nodes that are currently not managed by Slurm.

The login node of each of our clusters has the same name as the cluster itself, e.g. the tron login node is reachable via ssh under the hostname tron.
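
For example, logging in to the current login node with your ZEDAT account could look like this (the fully qualified hostname is an assumption; from within the department network the short hostname should suffice):

  ssh your-zedat-username@sheldon.physik.fu-berlin.de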

tron cluster - FB Physik - Location: Takustrasse 9 - OS: Debian/Stretch

Hosts     | Manager | Nodes | Form                                 | Hardware               | CPU               | Speed  | Cores/Node | RAM/Core | RAM/Node | Total RAM | Total Cores
z001-z020 | SLURM   | 20    | 1U                                   | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20         | 25G      | 512G     | 10240G    | 400
z021-z040 | SLURM   | 20    | 1U                                   | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20         | 12G      | 256G     | 5120G     | 400
z041-z113 | SLURM   | 72    | 2U (GPU nodes, 2x Nvidia Tesla K20x) | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20         | 6G       | 128G     | 9216G     | 1440
z163-z166 | SLURM   | 4     | 2U                                   | HP DL560 G8            | 4x Xeon E5-4650L  | 2.6GHz | 32         | 24G      | 768G     | 3072G     | 128

(as of 06.11.2018)
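
Because the table above may lag behind the actual configuration, you can query the current partitions and node states directly on the login node with Slurm's standard tools, for example:

  # list partitions and the state of their nodes
  sinfo
  # show per-node details such as CPU count and memory
  sinfo -N -l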
