====== Information about the HPC-Cluster ======
===== Access to the Cluster =====
In order to get access to the department of physics HPC resources you need to send an email to hpc@physik.fu-berlin.de. Please supply the following information:

  - Your ZEDAT account username
  - The group you are using the system for (e.g. ag-netz, ...)
  - The software you are using for your simulations (e.g. gromacs, gaussian, self-written code in language XYZ, ...) and whether you use MPI or OpenCL/...
  - Software that you know well enough that other HPC users within the department may ask you for help with it
  - A self-contained example job that is typical for the workload you will be running on the HPC systems (a minimal sketch of such a job script is shown below)
  - If you are no longer a member of the physics department, we would like an estimate of how much longer you will need access to the systems (e.g. to finish a paper)
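A self-contained example job on a Slurm-managed cluster is usually just a short batch script. The following is only a rough sketch: the resource limits, the module line and the program name are placeholders and have to be replaced with whatever matches your actual workload.

<code bash>
#!/bin/bash
#SBATCH --job-name=example          # name shown in the queue
#SBATCH --ntasks=1                  # a single task ...
#SBATCH --cpus-per-task=4           # ... using four cores
#SBATCH --mem=4G                    # total memory for the job
#SBATCH --time=01:00:00             # wall-clock limit (hh:mm:ss)
#SBATCH --output=example-%j.out     # output file, %j is replaced by the job ID

# Load your software environment here if the cluster provides modules,
# e.g.: module load gromacs   (placeholder)

# Placeholder program call; replace with your actual simulation command.
srun ./my_simulation input.dat
</code>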
===== Slurm documentation =====

  * Here is a [[nodes|list of special nodes]] that are currently not part of slurm.
  * Here is a [[userlist|list of HPC users]] and the software they use.
  * Using [[sheldon-gpu|GPU nodes on sheldon]] (a basic command sketch, including a GPU request, follows below).
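For day-to-day interaction with Slurm the standard client commands apply. The snippet below is only a sketch: the script names and the job ID are placeholders, and the exact options needed on our clusters (e.g. for the sheldon GPU nodes) are documented on the pages linked above.

<code bash>
sbatch myjob.sh                 # submit a batch job script
squeue -u $USER                 # show your pending and running jobs
scancel 12345                   # cancel a job by its job ID (placeholder ID)
sinfo                           # list partitions and node states
sbatch --gres=gpu:1 gpujob.sh   # request one GPU, e.g. on the GPU nodes
</code>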
===== General documentation =====

  * Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on a HPC-Cluster]].
  * A more current ...
  * Try to [[usetmpforio|use /tmp for I/O intensive single node jobs]] (see the sketch below).
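The idea behind the /tmp recommendation is to stage data to the node-local disk instead of doing many small reads and writes on the shared filesystem. A rough sketch of such a job script, with the program name and the output directory as placeholders, might look like this:

<code bash>
#!/bin/bash
#SBATCH --job-name=tmp-io-example
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

# Create a private scratch directory on the node-local /tmp.
WORKDIR=$(mktemp -d /tmp/${USER}-${SLURM_JOB_ID}-XXXX)

# Stage the input data from the submit directory to local scratch.
cp -r "$SLURM_SUBMIT_DIR/input" "$WORKDIR/"
cd "$WORKDIR"

# Placeholder for the actual I/O-intensive program.
./my_io_heavy_program input/

# Copy the results (placeholder directory) back to the shared filesystem and clean up.
cp -r results "$SLURM_SUBMIT_DIR/"
rm -rf "$WORKDIR"
</code>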
The following table lists some HPC resources available at the physics department. At the end of the table we also list the resources of the ZEDAT HPC system.

The login node for each of our clusters has the same name as the cluster itself.
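For example, to reach the login node of the sheldon cluster (mentioned above) one would ssh to a host of the same name; the fully qualified domain name used here is an assumption and may differ:

<code bash>
# hypothetical example: replace "sheldon" with the cluster you were granted access to
ssh yourzedatusername@sheldon.physik.fu-berlin.de
</code>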
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | @# | + | |
- | | | + | |
- | | @# | + | |
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
- | + | ||
^ Hosts ^ Manager ^ ... ^
(06.11.2018)