services:cluster:slurm
===== Quick Start for the impatient =====
  - Log in to one of the head nodes using ssh:
    * login node for sheldon cluster: sheldon-ng.physik.fu-berlin.de
    * login node for yoshi cluster: yoshi.physik.fu-berlin.de
    * login node for tron cluster: tron.physik.fu-berlin.de (currently offline)
  - Create a job script file to be run by the queuing system, supply information like:
    * how much memory to allocate for your job
    * how long your job is allowed to run
  - Submit your job script using the ''sbatch'' command

==== Example ====

Consider the following bash script with #SBATCH comments, which tell Slurm what resources you need:
<code>
# run your program...
hostname

# wait some time...
sleep 50
</code>
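For reference, a complete script of this kind starts with a shebang line followed by the #SBATCH directives. The following minimal sketch is only an illustration: the job name, memory and time values are assumptions and must be adapted to your job:

```shell
#!/bin/bash
# Minimal sketch of a full Slurm job script; every #SBATCH value below
# is an assumed example, not a site default.
#SBATCH --job-name=job1        # name shown in the queue (assumed)
#SBATCH --mem=100M             # memory to allocate (assumed value)
#SBATCH --time=00:05:00        # maximum wall-clock time (assumed value)

# run your program...
hostname
```

Since #SBATCH lines are ordinary bash comments, the script also runs unchanged outside the queuing system, which is handy for a quick local test.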
+ | |||
+ | Please note that your job will be killed by the queueing system if it tries to use more memory than requested or if it runs longer than the time specified in the batch script. So to be on the safe side you can set these values a litte bit higher. If you set the values to high, your job might not start because there are not enough resources (e.g. no machine has that amount of memory you are asking for). | ||
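One way to choose sensible values is to look at what a completed job actually consumed. Assuming Slurm's accounting database is enabled on the cluster, a query like the following (the job ID is just an example) reports peak memory and elapsed time:

```shell
# peak resident memory and runtime of a finished job (job ID is an example)
sacct -j 26495 --format=JobID,JobName,MaxRSS,Elapsed,State
```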
Now just submit your job script using ''sbatch'':
<code>
dreger@sheldon-ng: ...
Submitted batch job 26495
dreger@sheldon-ng: ...
Sun Jun 29 23:02:50 2014
   JOBID PARTITION ...
...
dreger@sheldon-ng: ...
JobId=26495 Name=job1
...
x001
</code>
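The prompts in the session above are truncated. Assuming the standard Slurm client tools, the commands behind such a session are typically the following (the script name is a placeholder):

```shell
sbatch job1.sh            # submit the script; prints "Submitted batch job <id>"
squeue                    # show queued and running jobs
scontrol show job 26495   # full details for a single job (ID from sbatch)
```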
+ | |||
+ | ==== Example of a GROMACS job script for one node using multithreading ==== | ||
+ | |||
+ | TBD | ||
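A possible starting point is sketched below; all #SBATCH values, the input file name and the binary name are assumptions. Depending on the installed GROMACS version the command may be ''mdrun'' (4.x) or ''gmx mdrun'' (5.x and later):

```shell
#!/bin/bash
# Sketch of a single-node, multithreaded GROMACS job; every value below
# is an assumed example and must be adapted.
#SBATCH --job-name=gromacs-smp
#SBATCH --nodes=1              # one node, shared-memory parallelism only
#SBATCH --cpus-per-task=8      # threads for mdrun (assumed)
#SBATCH --time=08:00:00        # wall-clock limit (assumed)

# run mdrun with 8 threads on one node; topol.tpr is a placeholder input
gmx mdrun -nt 8 -s topol.tpr
```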
+ | |||
+ | ==== Example of a GROMACS job script for multiple nodes using MPI ==== | ||
+ | |||
+ | TBD |
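A possible starting point is sketched below; the #SBATCH values, the MPI-enabled binary name (''mdrun_mpi'' or ''gmx_mpi mdrun'', depending on how GROMACS was built) and the input file are assumptions:

```shell
#!/bin/bash
# Sketch of a multi-node MPI GROMACS job; every value below is an
# assumed example and must be adapted.
#SBATCH --job-name=gromacs-mpi
#SBATCH --nodes=4              # number of nodes (assumed)
#SBATCH --ntasks-per-node=8    # MPI ranks per node (assumed)
#SBATCH --time=08:00:00        # wall-clock limit (assumed)

# srun starts one MPI rank per task across all allocated nodes;
# mdrun_mpi and topol.tpr are placeholder names
srun mdrun_mpi -s topol.tpr
```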
services/cluster/slurm.1404074327.txt.gz · Last modified: 2014/06/29 20:38 by dreger