====== Information about the HPC-Cluster ======
If you have questions, you can find us on [[https://meet.physik.fu-berlin.de/#/room/!lwzXdWYwaTwKKSKfAb:physik.fu-berlin.de?via=physik.fu-berlin.de|Matrix in #hpc:physik.fu-berlin.de]].
===== Access to the Cluster =====
To get access to the HPC resources of the physics department you need to send an email to [[hpc@physik.fu-berlin.de]]. Please include the following information:
- Your ZEDAT account username
- The group you are using the system for (e.g. ag-netz, ag-imhof, ...)
- The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, ...) and whether you use MPI or OpenCL/CUDA.
- Software that you know well enough that other HPC users within the department could ask you for help with it.
- A self-contained example job that is typical for the workload you will be using the HPC systems for, ideally **with a small README** describing how to run it **and a job script** (a minimal sketch is shown below this list). If possible, scale it so that it runs for a few minutes and at most an hour.
- If you are no longer a member of the physics department, please also give us an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
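If you are unsure what such an example job could look like, here is a minimal sketch of a Slurm batch script. It is only an illustration: the resource values, the module name and the program name are placeholders, not specific to our cluster, so adapt them to your actual workload (see the [[slurm|Introduction to the Slurm HPC cluster]] for details).
<code bash>
#!/bin/bash
#SBATCH --job-name=example-job        # name shown in the queue
#SBATCH --output=example-job.%j.log   # stdout/stderr file, %j is the job id
#SBATCH --ntasks=1                    # number of tasks (MPI ranks)
#SBATCH --cpus-per-task=4             # cores per task
#SBATCH --mem-per-cpu=2G              # memory per core
#SBATCH --time=00:30:00               # wall-clock limit, keep it short

# Load the environment your code needs; the module name below is a
# placeholder, check "module avail" on the login node for what is installed.
# module load gromacs

# Replace this with the command that runs your small test case.
srun ./my_simulation --input small_test_case.inp
</code>
Such a script would be submitted with ''sbatch jobscript.sh'' and its state can be checked with ''squeue -u $USER''.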
===== Slurm documentation =====
* [[important|Important notes]] on cluster usage
* Start with the [[slurm|Introduction to the Slurm HPC cluster]].
* Using [[interactivesessions|interactive sessions]] with the queuing system.
* How to make use of the [[gpunodes|GPU-nodes]].
* Here is a [[nodes|list of special nodes]] that are currently not part of Slurm.
* Here is a [[userlist|list of HPC users]] and the software they use.
===== General documentation =====
* Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on an HPC cluster]].
* A more recent Python version has been built for cluster usage. The [[pythoncluster|Python on the HPC-Cluster]] tutorial describes how to set it up.
===== Overview of available resources =====
The current cluster is ''sheldon''; please connect to this login node while we update the section below.
The following table lists some HPC resources available at the physics department. The tron cluster at Takustrasse 9 is currently being restructured. We also have some [[nodes|special purpose nodes]] that are currently not managed by Slurm.
The login node of each of our clusters has the same name as the cluster itself, e.g. the tron login node is reachable via ssh under the hostname ''tron''.
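For example, assuming your ZEDAT username is ''jdoe'' (a placeholder), connecting to the current login node would look like this:
<code bash>
# "jdoe" stands for your ZEDAT account username
ssh jdoe@sheldon
# from outside the department network the fully qualified hostname
# (presumably sheldon.physik.fu-berlin.de) may be required
</code>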
^ Hosts ^ Manager ^ Nodes ^ Form ^ Hardware ^ CPU ^ Speed ^ Cores/Node ^ RAM/Core ^ RAM/Node ^ Total RAM ^ Total Cores ^
| @#cfc:**tron cluster** - FB Physik - Location: Takustrasse 9 - OS: Debian/Stretch ||||||||||||
| @#cfc:z001-z020 | SLURM | 20 | 1U | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20 | 25G | 512G | 10240G | 400 |
| @#cfc:z021-z040 | SLURM | 20 | 1U | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20 | 12G | 256G | 5120G | 400 |
| @#cfc:z041-z113 | SLURM | 72 | 2U GPU Nodes (2x Nvidia Tesla K20x) | IBM iDataPlex dx360 M4 | 2x Xeon E5-2680v2 | 2.8GHz | 20 | 6G | 128G | 9216G | 1440 |
| @#cfc:z163-z166 | SLURM | 4 | 2U | HP DL560 G8 | 4x Xeon E5-4650L | 2.6GHz | 32 | 24G | 768G | 3072G | 128 |
| @#cfc:**#Taku9** | | **~~=sum(range(col(),1,col(),row()-1))~~** | | | | | | | | **~~=sum(range(col(),1,col(),row()-1))~~G** | **~~=sum(range(col(),1,col(),row()-1))~~** |
(06.11.2018)
{{:fotos:dsc_0445wiki.jpg?width=370|}}{{:fotos:dsc_0450.jpg?width=370|}}
{{:fotos:dsc_0446.jpg?width=740|}}