====== Information about the HPC-Cluster ======
===== Access =====

In order to get access to the department of physics HPC resources you need to send an email to hpc@physik.fu-berlin.de. Please supply the following information:

  - Your ZEDAT account username
  - The group you are using the system for (e.g. ag-netz, ...)
  - The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, ...) and whether you use MPI or OpenCL/...
  - Software that you happen to know so well that other HPC users within the department may ask you for help.
  - A self-contained example job that is typical for the workload you will be using the HPC systems for, ideally with a small README describing how to run it and a job script (a rough sketch follows this list). If possible, scale it so that it runs between a few minutes and an hour at maximum.
  - If you are no longer a member of the physics department, we would like to get an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
+ | ===== Slurm documentation ===== | ||
  * [[important|Important notes]] on cluster usage
  * Start with the [[slurm|Introduction to the Slurm HPC cluster]].
  * Using [[interactivesessions|interactive sessions]] with the queuing system.
  * How to make use of the [[gpunodes|GPU-nodes]] (a short command sketch for interactive sessions and GPU jobs follows this list).
  * Here is a [[nodes|list of special nodes]].
  * Here is a [[userlist|list of HPC users]] and the software they use
  * Using [[sheldon-gpu|GPU nodes on sheldon]]
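The following is a minimal sketch of those two use cases; the requested resources and the script name are placeholders, and the actual partitions, GPU types and limits are described on the pages linked above.

<code bash>
# Interactive session: get a shell on a compute node through the queuing system
srun --ntasks=1 --cpus-per-task=2 --mem=2G --time=01:00:00 --pty bash -i

# GPU job: request one GPU via Slurm's generic resource (GRES) mechanism
sbatch --gres=gpu:1 my_gpu_job.sh
</code>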
===== General documentation =====
  * Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on a HPC-Cluster]].
  * A more current ...
  * Try to [[usetmpforio|use /tmp for I/O intensive single node jobs]] (a rough sketch of the pattern follows this list).
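A rough sketch of that pattern inside a job script, with placeholder file and program names, could look like this:

<code bash>
# Create a private scratch directory in /tmp on the compute node
SCRATCH=$(mktemp -d /tmp/${USER}.${SLURM_JOB_ID}.XXXXXX)

# Copy the input there, run the I/O heavy part locally, then copy the results back
cp input.dat "$SCRATCH"/
cd "$SCRATCH"
"$SLURM_SUBMIT_DIR"/my_io_heavy_program input.dat > output.dat
cp output.dat "$SLURM_SUBMIT_DIR"/

# Clean up so the local disk of the node does not fill up
rm -rf "$SCRATCH"
</code>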
===== Overview of available resources =====
The following table lists some HPC resources available at the physics department. At the end of the table we also list the resources for the ZEDAT cluster.
The login node of each of our clusters has the same name as the cluster, e.g. the tron login node is reachable via ssh under the hostname ''tron''.
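For example, assuming the usual department domain, logging in from outside would look something like this (replace the cluster and user names with your own):

<code bash>
# Log in to the tron cluster's login node
# (the full hostname is an assumption based on the physik.fu-berlin.de domain)
ssh yourZEDATusername@tron.physik.fu-berlin.de
</code>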
^ Hosts ^ Manager ^ ... ^
(06.11.2018)