====== Information about the HPC-Cluster ======

<note tip>If you have questions, you can find us on [[https://…]].</note>

===== Access to the Cluster =====

In order to get access to the physics department's HPC resources you need to send an email to [[hpc@physik.fu-berlin.de]]. Please supply the following information:

  - Your ZEDAT account username
  - The group you are using the system for (e.g. AG Netz, AG Eisert, AG Franke, …)
  - The software you are using for your numerics
  - Software that you happen to know so well that other HPC users within the department may ask you for help.
  - A self-contained example (see the sketch below)
  - If you are no longer a member of the physics department, we would like to get an estimate of how much longer you will need access to the systems (e.g. to finish some paper).

The example must contain:

  - a small README,
  - a Slurm job script, and
  - the program that is run in the example and/or all input files needed to run it; this includes data files and definitions for the environment the job is to run in (e.g. a ''…'').
+ | |||
+ | If possible: | ||
+ | |||
+ | - The example should have an option to scale it so it runs between a few minutes and an hour at maximum, so that it can be used for benchmarking. | ||
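
The following is only a minimal sketch of such an example, assuming a single-node Python program; the environment setup, script, and file names are placeholders and not specific to our clusters:

<code bash>
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --ntasks=1                # one task on a single node
#SBATCH --cpus-per-task=4         # CPUs the example actually needs
#SBATCH --mem-per-cpu=2G          # memory per CPU
#SBATCH --time=00:30:00           # stays well below one hour

# placeholder: load or activate the environment the job needs,
# e.g. a module, a Python virtualenv, or a conda environment
# source ~/my-env/bin/activate

# N controls the problem size, so the runtime can be scaled for benchmarking
srun python3 my_simulation.py --size "${N:-1000}"
</code>

A short README next to the script should state how to submit it (e.g. ''sbatch job.sh'') and roughly how long it is expected to run.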
+ | |||
+ | If you can't answer the questions for your example, these steps can help you answer them | ||
+ | |||
+ | - If you have written the code yourself, what dependecies does it have (e.g. Python libraries you import)? | ||
+ | - How long does your example run? | ||
+ | - How many CPUs and how much memory does the example need? | ||
+ | - Can the example' | ||
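
If the example has already run as a Slurm job elsewhere, the accounting tools can help answer the runtime and memory questions; this is a minimal sketch, assuming the standard Slurm tools are available and ''12345'' is a placeholder job ID:

<code bash>
# elapsed time, allocated CPUs and peak memory of a finished job
sacct -j 12345 --format=JobID,Elapsed,AllocCPUS,MaxRSS,State

# if the seff helper is installed, it summarises CPU and memory efficiency
seff 12345

# for Python code, list the packages the current environment provides
pip freeze
</code>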

===== Slurm documentation =====

Read this for an introduction to the Slurm queuing system, if you haven't used it before:

  * Start with the [[slurm|Introduction to the Slurm HPC cluster]].
+ | |||
+ | Read this for some important notes on the specifics of our clusters. | ||
+ | |||
+ | * [[important|Important notes]] on cluster usage | ||
+ | |||
+ | These are more specialised topics: | ||
+ | |||
* Using [[interactivesessions|interactive sessions]] with the queuing system. | * Using [[interactivesessions|interactive sessions]] with the queuing system. | ||
- | * How to make use of the [[gpunodes|GPU-nodes]]. | ||
* Here is a [[nodes|list of special nodes]] that are currently not part of slurm. | * Here is a [[nodes|list of special nodes]] that are currently not part of slurm. | ||
* Here is a [[userlist|list of HPC users]] and the software they use | * Here is a [[userlist|list of HPC users]] and the software they use | ||
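
The details are on the interactive sessions page; as a minimal sketch, an interactive shell on a compute node can be requested roughly like this (the resource values are placeholders):

<code bash>
# request an interactive shell via Slurm (adjust resources as needed)
srun --ntasks=1 --cpus-per-task=2 --mem=4G --time=01:00:00 --pty bash -i
</code>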
- | |||
- | ===== General documentation ===== | ||
- | |||
- | * Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on a HPC-Cluster]]. | ||
- | * A current python version has been built for cluster usage. The [[pythoncluster|Python on the HPC-Cluster]] tutorial describes how to set it up. | ||
- | * Try to [[usetmpforio|use /tmp for I/O intensive single node jobs]] | ||

===== Overview of available resources =====

The following table lists some HPC resources available at the physics department. The tron cluster at Takustraße …

The login node of each of our clusters has the same name as the cluster, e.g. the sheldon login node is reachable via ssh under the hostname ''sheldon''.
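
Logging in could look like this; the full hostname is an assumption based on the department's mail domain, and the username is a placeholder for your ZEDAT account:

<code bash>
# connect to the sheldon login node (domain assumed, username is a placeholder)
ssh zedat_username@sheldon.physik.fu-berlin.de
</code>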

^ Hosts ^ Nodes ^ Cores/Node ^ RAM/Core ^ RAM/Node ^ CPU features ^ GPU ^ on-GPU RAM ^ Total cores ^ Total RAM ^ Total GPUs ^
| @#cfc:x[001-016,…] | … | | | | | | | | | |
| @#cfc:x[017-048] | 32 | 24 | 20.9GB | 502GB | x86-64-v2 | | | 768 | 16064GB | |
| @#cfc:x[161-176] | 16 | 24 | 5.2GB | 125GB | x86-64-v3 | | | 384 | 2000GB | |
| @#cfc:sheldon,… | … | | | | | | | | | |
| @#cfc:xq[01-10] | 10 | 128 | 2.0GB | 250GB | x86-64-v3 | 2x A5000 | 24GB | 1280 | 2500GB | 20 |
| @#cfc:xgpu[01-05,07-13] | 12 | 16 | 11.7GB | … | | | | | | |
| @#cfc:xgpu06 | … | | | | | | | | | |
| @#cfc:xgpu[14-23] | 10 | 16 | 11.7GB | … | | | | | | |
| @#cfc:xgpu[24-25] | … | | | | | | | | | |
| @#cfc:xgpu26 | … | | | | | | | | | |
| @#cfc:xgpu28 | … | | | | | | | | | |
| @#cfc:xgpu[29-33] | 5 | 24 | 5.2GB | 125GB | x86-64-v3 | 4x nVidia Titan V | 12GB | 120 | 625GB | 20 |
| @#cfc:xgpu[27,34-52,54-56,…] | … | | | | | | | | | |
| @#cfc:xgpu57 | … | | | | | | | | | |
| @#cfc:xgpu[59-61] | 3 | 36 | 41.9GB | … | | | | | | |
| @#cfc:xgpu63 | … | | | | | | | | | |
| @#cfc:**#Taku 7** | **293** | | | | | | | **7948** | | |

(07.02.2025)