====== Introduction to the HPC cluster of the physics department ======
  
The login node of the HPC cluster is ''sheldon.physik.fu-berlin.de''. You can connect to it from anywhere using ssh, e.g. by issuing ''ssh sheldon.physik.fu-berlin.de'' on the command line or by using PuTTY on Windows.
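For example, a connection from a terminal could look like this (''username'' is a placeholder for your own account name):

<code bash>
# Connect to the login node; replace "username" with your own account name
ssh username@sheldon.physik.fu-berlin.de
</code>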
^ Resource ^ Format ^ Description ^ Example ^ Default ^
| walltime | seconds, or [[HH:]MM:]SS | Maximum amount of real time during which the job can be in the running state. The job will be terminated once this limit is reached. | **walltime=100:00:00** -> request 100 hours for this job | walltime=1:00:00 (1 hour) |
| pmem | size* | Maximum amount of physical memory used by any single process of the job. In our case this means per core. | **pmem=8gb** -> request 8gb RAM per core | pmem=2gb |
| file | size* | The amount of **local disk space per core per node** requested for the job. The space can be accessed at /local_scratch/$PBS_JOBID | **file=10gb** -> request 10 gigabytes of local disk space on each compute node for each core used | none |
  
**size* format** = integer, optionally followed by a multiplier {b,kb,mb,gb,tb} meaning {bytes,kilobytes,megabytes,gigabytes,terabytes}. No suffix means bytes.
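In a job-script these limits are passed to the scheduler with ''#PBS -l'' lines, for example (the values below are only illustrative and should be adapted to your job):

<code bash>
#!/bin/bash
# Illustrative resource requests -- adjust the values to your own job
#PBS -l walltime=24:00:00   # run for at most 24 hours
#PBS -l pmem=2gb            # 2 GB of RAM per core
#PBS -l file=10gb           # 10 GB of local scratch space per core per node
</code>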
Please try to use local disk space on the compute nodes whenever possible. Since access to local storage is faster than access to your $PBS_O_WORKDIR, this will most likely speed up your compute jobs. At the same time it reduces the load on the central home-server. However, do not forget to copy your data back from the compute nodes to $PBS_O_WORKDIR after the job has finished, since the local disk space will be cleared once your job-script has finished. The following examples use local disk space.
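As a minimal sketch of the copy-in, compute, copy-out pattern (''my_program'' and ''input.dat'' are placeholders for your own executable and data):

<code bash>
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -l pmem=2gb
#PBS -l file=10gb

# The scheduler provides a per-job scratch directory on each compute node
SCRATCH=/local_scratch/$PBS_JOBID

# copy the input data from the submit directory to the fast local scratch
cp $PBS_O_WORKDIR/input.dat $SCRATCH/

# run the calculation inside the scratch directory
cd $SCRATCH
my_program input.dat > output.log

# copy the results back before the scratch space is cleared
cp $SCRATCH/output.log $PBS_O_WORKDIR/
</code>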
  
=== Advanced job-script example running CP2K using MPI on 12 nodes with 8 cores each ===
  
<code>