====== services:cluster:slurm ======
===== Quick Start for the impatient =====
  
  - Log in to one of the head nodes using ssh:
    * login node for tron cluster: tron.physik.fu-berlin.de
  - Create a job script file to be run by the queuing system, supplying information like:
    * how much memory to allocate for your job
  - Submit your job script using the ''sbatch'' command
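Assuming the tron login node and a script called ''job.sh'' (the filename and editor here are only illustrative), the three steps boil down to:

<code>
ssh tron.physik.fu-berlin.de   # step 1: log in to a head node
nano job.sh                    # step 2: create the job script
sbatch job.sh                  # step 3: submit it to the queuing system
</code>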
  
==== Example of a very basic job script ====
  
Consider the following bash script with ''#SBATCH'' comments, which tell Slurm what resources you need:

<file bash example_job.sh>
#!/bin/bash
  
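# Example directives; the values below are illustrative (chosen to match the
# sample output further down) -- adjust them to your own job:
#SBATCH --job-name=job1          # job name shown in squeue
#SBATCH --time=00:01:00          # maximum runtime (hh:mm:ss)
#SBATCH --mem=100M               # memory to reserve for the job
#SBATCH --output=job1_%j.out     # stdout file, %j expands to the job id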
# wait some time...
sleep 50
</file>
  
Please note that your job will be killed by the queuing system if it tries to use more memory than requested or if it runs longer than the time specified in the batch script. To be on the safe side you can set these values a little higher. If you set them too high, however, your job might not start because the resources cannot be allocated (e.g. no machine has the amount of memory you are asking for).
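A little headroom can be expressed directly in the ''#SBATCH'' header; the numbers below are purely illustrative:

<code>
#SBATCH --time=00:15:00   # request 15 minutes for a job that usually takes ~10
#SBATCH --mem=3G          # request 3 GB for a job that usually needs ~2 GB
</code>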
  
Now just submit your job script using ''sbatch job1.sh'' from the command line. Please run jobs directly from the cluster-wide ''/scratch'' filesystem, where you have a directory ''/scratch/<username>'', to lower the load on the ''/home'' server. For testing purposes set the runtime of your job below 1 minute and submit it to the test partition by adding ''-p test'' to sbatch:
 +<code
 +dreger@sheldon-ng:..dreger/quickstart> pwd
 /scratch/dreger/quickstart /scratch/dreger/quickstart
-dreger@sheldon-ng:..dreger/quickstart> **sbatch -p test job1.sh**+dreger@sheldon-ng:..dreger/quickstart> sbatch -p test job1.sh
 Submitted batch job 26495 Submitted batch job 26495
-dreger@sheldon-ng:..dreger/quickstart> **squeue -l -u dreger**+dreger@sheldon-ng:..dreger/quickstart> squeue -l -u dreger
 Sun Jun 29 23:02:50 2014 Sun Jun 29 23:02:50 2014
              JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)              JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)
              26495      test     job1   dreger  RUNNING       0:24      1:00      1 x001              26495      test     job1   dreger  RUNNING       0:24      1:00      1 x001
-dreger@sheldon-ng:..dreger/quickstart> **cat job1_26495.out**+dreger@sheldon-ng:..dreger/quickstart> cat job1_26495.out
 JobId=26495 Name=job1 JobId=26495 Name=job1
    UserId=dreger(4440) GroupId=fbedv(400)    UserId=dreger(4440) GroupId=fbedv(400)
Line 72: Line 77:
  
 x001 x001
-</xterm> +</code>
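If a submitted job turns out to be misconfigured, you can inspect and remove it with the standard Slurm commands (the job id comes from the ''sbatch''/''squeue'' output above):

<code>
squeue -u $USER    # list your own pending and running jobs
scancel 26495      # cancel the job with id 26495
</code>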
services/cluster/slurm.1404076053.txt.gz · Last modified: 2014/06/29 21:07 by dreger
