Use /tmp for I/O-intensive single-node jobs

Jobs that perform a lot of I/O operations on a shared cluster filesystem like /scratch can severely slow down the whole system. If your job does not use multiple nodes and does not read or write very large files, consider moving its input and output files to the /tmp folder on the compute node itself.

/tmp is a RAM-based filesystem, meaning that anything you store there actually resides in memory, so space is quite limited: currently you can use at most 24GB of /tmp space. Because /tmp uses RAM, its usage also counts towards the allocated memory limit (the --mem or --mem-per-cpu settings). In other words, the memory your process uses during the calculation plus the amount of data stored in files below /tmp must fit within the requested allocation and can never exceed the total amount of RAM installed in the compute node. Let's look at an example…

Please note: usable space below /tmp is limited by the --mem option

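As a rough sketch of how such a job could look, the following session shows a hypothetical single-node batch script that stages its input from /scratch into /tmp, runs the calculation there and copies the results back afterwards. The script name, the paths and the program my_app are placeholders, as is the 16G memory request; whatever you request with --mem has to cover both the application itself and everything written below /tmp (which may not exceed 24GB).

<xterm>
$ cat tmp_job.sh
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
# the memory request must cover the application AND all files kept in /tmp
#SBATCH --mem=16G

# stage the input data from the shared filesystem into the node-local /tmp
cp /scratch/$USER/input.dat /tmp/

# run the calculation with all heavy I/O going to /tmp
cd /tmp
my_app --input input.dat --output output.dat

# copy the results back to the shared filesystem before the job ends
cp /tmp/output.dat /scratch/$USER/
$ sbatch tmp_job.sh
</xterm>

How much memory you actually need depends on your application and the size of your data; the point is that the --mem request, not the size of /scratch, determines how much you can put into /tmp.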