services:cluster:slurm
  - Submit your job script using the ''sbatch'' command
==== Example ====

Consider the following bash script with #SBATCH comments, which tell Slurm what resources you need:
<code>
...
sleep 50
</code>
Please note that your job will be killed by the queueing system if it tries to allocate more memory than requested, or if it runs longer than the time specified in the batch script. To be on the safe side you can set these values a little higher. If you set them too high, however, your job might not start because the resources are not available (e.g. no machine has the amount of memory you are asking for).
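To make the resource directives concrete, here is a minimal sketch of such a job script. The directive values are illustrative assumptions, not site defaults; adjust them to what your job really needs:

```shell
#!/bin/bash
# Hypothetical job script; the directive values are illustrative, not site defaults.
#SBATCH --job-name=job1     # name shown by squeue/scontrol
#SBATCH --ntasks=1          # a single task
#SBATCH --mem=100           # memory in MB; allocating more gets the job killed
#SBATCH --time=00:02:00     # wall-clock limit; running longer gets the job killed

# The actual workload; a short sleep stands in for a real computation
msg="job running on $(hostname)"
echo "$msg"
sleep 2
```

Note that bash treats the ''#SBATCH'' lines as ordinary comments, so the script also runs as a plain shell script; only Slurm interprets them as resource directives.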
Now just submit your job script using ''sbatch'':
<code>
dreger@sheldon-ng:...$ sbatch ...
Submitted batch job 26495
dreger@sheldon-ng:...$ ...
Sun Jun 29 23:02:50 2014
 JOBID PARTITION ...
dreger@sheldon-ng:...$ scontrol show job ...
JobId=26495 Name=job1
   ...
x001
</code>
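When submitting from a script it is often useful to capture the job ID from the ''Submitted batch job N'' line that ''sbatch'' prints. As a hedged sketch, the extraction itself looks like this (here applied to a captured example line, since in a real session you would pipe the output of ''sbatch'' directly):

```shell
# In a real session: jobid=$(sbatch yourscript.sh | awk '{print $4}')
# Here we parse a captured example line to demonstrate the extraction.
line="Submitted batch job 26495"
jobid=$(echo "$line" | awk '{print $4}')
echo "job id: $jobid"    # prints "job id: 26495"
```

The captured ID can then be passed on, e.g. to ''scontrol show job'' or ''scancel''.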
==== Example of a GROMACS job script for one node using multithreading ====

TBD

==== Example of a GROMACS job script for multiple nodes using MPI ====

TBD
services/cluster/slurm.txt · Last modified: 2015/06/16 12:53 by dreger