====== Information about the HPC-Cluster ======

<note tip>If you have questions, you can find us on [[https://

===== Access to the Cluster =====
In order to get access to the department of physics HPC resources you need to send an email to hpc@physik.fu-berlin.de. Please supply the following information:
  - The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, ...) and whether you use MPI or OpenCL/
  - Software that you happen to know so well that other HPC users within the department may ask you for help.
  - A self-contained example
  - If you are no longer a member of the physics department, we would like to get an estimate of how much longer you will need access to the systems (e.g. to finish some paper)
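A self-contained example usually means a minimal job script plus the input files needed to reproduce your workload. As a rough sketch, a minimal Slurm batch script might look like the following (the resource values, file names, and program name are illustrative assumptions, not cluster defaults):

```shell
#!/bin/bash
# Minimal, illustrative Slurm job script -- adjust all values to your workload.
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --ntasks=1                # number of (MPI) tasks
#SBATCH --cpus-per-task=4         # CPU cores per task
#SBATCH --mem=4G                  # memory for the job
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)

# Run the actual workload; replace with your own program and input.
srun ./my_simulation input.dat
```

Submitting such a script (e.g. with ''sbatch job.sh'') lets the HPC team see at a glance which resources your jobs typically need.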
===== Slurm documentation =====
  * Here is a [[nodes|list of special nodes]] that are currently not part of slurm.
  * Here is a [[userlist|list of HPC users]] and the software they use
  * Using [[sheldon-gpu|GPU nodes on sheldon]]
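The pages above cover the details; for orientation, the everyday Slurm commands look like this (the script name and job ID are placeholders):

```shell
sbatch job.sh          # submit a batch script; prints the assigned job ID
squeue -u "$USER"      # list your jobs that are queued or running
scontrol show job 123  # show the detailed state of one job (123 is a placeholder)
scancel 123            # cancel that job
```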
===== General documentation =====
  * Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on a HPC-Cluster]].
  * A more current
  * Try to [[usetmpforio|use /tmp for I/O intensive single node jobs]]
The following table lists some HPC resources available at the physics department. At the end of the table we also list the resources for the ZEDAT [[http://

The login node of each of our clusters has the same name as the cluster, e.g. the tron login node is reachable via ssh under the hostname ''
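Following that naming scheme, a login could look like this (the full hostname and the username are assumptions based on the department's domain, not confirmed values):

```shell
# Assumed hostname pattern: <cluster-name>.physik.fu-berlin.de
ssh yourusername@tron.physik.fu-berlin.de
```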
^ Hosts ^ Manager ^
| @#cfc:z001-z020 | SLURM |
| @#cfc:z041-z113 | SLURM |
services/cluster/start.1541500593.txt.gz · Last modified: 2018/11/06 10:36 by dreger