====== Information about the HPC-Cluster ======
In order to get access to the department of physics HPC resources you need to send an email to hpc@physik.fu-berlin.de. Please include the following information:

  - Your ZEDAT account username
  - The group you are using the system for (e.g. ag-netz, ...)
  - The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, ...) and whether you use MPI or OpenCL/...
  - Software that you happen to know so well that other HPC users within the department may ask you for help.
  - A self-contained job example that is typical for the workload you are using the HPC systems for, ideally with a small README describing how to run it and a job script. If possible, scale it so it runs between a few minutes and an hour at maximum.
  - If you are no longer a member of the physics department, we would like to get an estimate of how much longer you will need access to the systems (e.g. to finish some paper).
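The self-contained job example asked for in the last point can be as small as the following sketch. This is only an illustration: the job name, resource limits, output file name and the ''echo'' payload are placeholders for your real workload, not site policy.

```shell
#!/bin/bash
# Minimal Slurm batch script -- a sketch only. Replace the placeholder
# resource values and the echo line with your actual simulation command.
#SBATCH --job-name=example        # placeholder job name
#SBATCH --ntasks=1                # a single task
#SBATCH --time=00:15:00           # short wall-clock limit, as requested above
#SBATCH --mem=1G                  # placeholder memory request
#SBATCH --output=example-%j.out   # %j is replaced by the job id

# Stand-in for the real workload described in your README:
echo "Job step running on $(hostname)"
```

Such a script would typically be submitted with ''sbatch example.sh''.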
**2018-06-28**: Currently there are three HPC clusters in production at the physics department.
===== Slurm documentation =====

  * [[important|Important notes]] on cluster usage
  * Start with the [[slurm|Introduction to the Slurm HPC cluster]].
  * Using [[interactivesessions|interactive sessions]] with the queuing system.
  * How to make use of the [[gpunodes|GPU-nodes]].
  * Here is a [[nodes|list of special nodes]] that are currently not part of slurm.
  * Here is a [[userlist|list of HPC users]] and the software they use.
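The pages linked above cover the details; for orientation, these are the standard Slurm commands involved (the script names and the job id are placeholders):

```
$ sbatch job.sh                    # submit a batch job script to the queue
$ squeue -u $USER                  # show your pending and running jobs
$ scancel 12345                    # cancel the job with id 12345
$ srun --pty bash                  # open an interactive session on a compute node
$ sbatch --gres=gpu:1 gpu-job.sh   # request one GPU for a job on the GPU-nodes
```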
===== General documentation =====

  * Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on a HPC-Cluster]].
  * A current python version ...
  * Try to [[usetmpforio|use /tmp for I/O intensive single node jobs]].
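The /tmp pattern for I/O-intensive single-node jobs can be sketched as follows. The staging paths and the toy "computation" (''tr'') are placeholders for your real input files and program; the point is that the heavy I/O happens on node-local /tmp instead of the network file system, with a single copy-back at the end.

```shell
#!/bin/bash
# Sketch: stage data to node-local /tmp, work there, copy results back once.
set -eu
SCRATCH=$(mktemp -d /tmp/job.XXXXXX)   # private scratch dir on the local disk
trap 'rm -rf "$SCRATCH"' EXIT          # always clean up, even on failure

WORKDIR=$PWD
echo "input data" > "$SCRATCH/input.dat"      # stand-in for copying real input
# ... run the I/O-intensive part against $SCRATCH, not the network FS ...
tr a-z A-Z < "$SCRATCH/input.dat" > "$SCRATCH/output.dat"
cp "$SCRATCH/output.dat" "$WORKDIR/"          # copy results back once at the end
```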
===== Overview of available resources =====

The following table lists some HPC resources available at the physics department. At the end of the table we also list the resources for the ZEDAT cluster.
^ Hosts ^ Manager ^ ... ^
| @#cfc:**FB Physik - Standort Takustrasse 9** |||
| ... |||
| @#ced:**FB Physik - Standort HLRN** |||
| ... |||
| @#cef:**FB Physik - Standort ZEDAT** |||
| ... |||
(06.11.2018)
services/cluster/start.txt · Last modified: 2024/04/26 14:33 by hoffmac00