====== Information about the HPC-Cluster ======
In order to get access to the physics department's HPC resources, you need to send an email to hpc@physik.fu-berlin.de. Please include the following information:

  - Your ZEDAT account username.
  - The group you are using the system for (e.g. ag-netz, ...).
  - The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, ...) and whether you use MPI or OpenCL/...
  - Software that you happen to know so well that other HPC users within the department may ask you for help.
  - A self-contained job example that is typical for the workload you are using the HPC systems for, ideally with a small README describing how to run it and a job script (see the sketch below). If possible, scale it so that it runs between a few minutes and an hour at maximum.
  - If you are no longer a member of the physics department, we would like to get an estimate of how much longer you will need access to the systems (e.g. to finish a paper).
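If you are unsure what such a job example could look like, the following is a minimal sketch of a Slurm batch script. The job name, resource values, and the program ''my-simulation'' are placeholders, not settings taken from our clusters:

<code bash>
#!/bin/bash
#SBATCH --job-name=example-job       # name shown in the queue
#SBATCH --ntasks=4                   # number of tasks (MPI ranks)
#SBATCH --mem-per-cpu=1G             # memory per allocated CPU
#SBATCH --time=00:30:00              # wall-clock limit; keep the example short
#SBATCH --output=example-job.%j.out  # output file, %j expands to the job id

# "my-simulation" and "input.dat" are placeholders for your actual workload
srun ./my-simulation input.dat
</code>

Such a script is submitted with ''sbatch'' from a login node.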

**2018-06-28**: Currently there are three HPC clusters in production at the physics department. One cluster is located in the HLRN datacenter, ...

===== Slurm documentation =====
  * Using [[interactivesessions|interactive sessions]] with the queuing system (see the quick sketch below).
  * How to make use of the [[gpunodes|GPU-nodes]].
  * Here is a [[nodes|list of special nodes]] that are currently not part of slurm.
  * Here is a [[userlist|list of HPC users]] and the software they use.
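As a quick reference, the commands below sketch how an interactive session and a GPU request typically look with Slurm. The resource values are examples and ''my_gpu_job.sh'' is a placeholder; the pages linked above describe the settings that actually apply to our clusters:

<code bash>
# start an interactive shell on a compute node (resource values are examples)
srun --ntasks=1 --time=01:00:00 --pty bash -i

# submit a batch script that requests one GPU ("my_gpu_job.sh" is a placeholder)
sbatch --gres=gpu:1 my_gpu_job.sh
</code>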
===== General documentation =====
The following table lists some HPC resources available at the physics department. At the end of the table we also list the resources for the ZEDAT [[http://...]].
^ Hosts ^ Manager ^ ... ^
| @#cfc:z001-z020 | ... | ... |
| @#cfc:z021-z040 | ... | ... |
| @#cfc:z041-z113 | ... | ... |
| @#cfc:z163-z166 | ... | ... |
| ... | ... | ... |

(06.11.2018)