<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://wiki.physik.fu-berlin.de/it/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://wiki.physik.fu-berlin.de/it/feed.php">
        <title>DokuWiki - services:cluster</title>
        <description></description>
        <link>http://wiki.physik.fu-berlin.de/it/</link>
        <image rdf:resource="http://wiki.physik.fu-berlin.de/it/_media/wiki:dokuwiki-128.png" />
        <dc:date>2026-04-30T05:59:55+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:gpunodes?rev=1714142266&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:important?rev=1738946613&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:interactivesessions?rev=1434465331&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:localstorage?rev=1383906387&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:mmacluster?rev=1738849088&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:nodes?rev=1738849267&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:pythoncluster?rev=1714142362&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:queuing-system?rev=1402497634&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:slurm?rev=1762957028&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:start?rev=1771513250&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:usetmpforio?rev=1714141175&amp;do=diff"/>
                <rdf:li rdf:resource="http://wiki.physik.fu-berlin.de/it/services:cluster:uv1000?rev=1426861216&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://wiki.physik.fu-berlin.de/it/_media/wiki:dokuwiki-128.png">
        <title>DokuWiki</title>
        <link>http://wiki.physik.fu-berlin.de/it/</link>
        <url>http://wiki.physik.fu-berlin.de/it/_media/wiki:dokuwiki-128.png</url>
    </image>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:gpunodes?rev=1714142266&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-04-26T14:37:46+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>gpunodes</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:gpunodes?rev=1714142266&amp;do=diff</link>
        <description>Introduction to GPU accelerated jobs

Currently we have 31 nodes in the yoshi cluster (ygpu01-ygpu31) equipped with GPU boards. The exact hardware config is:

	*  2x NVidia [Tesla M2070] 
	*  2x Xeon X5570
	*  24GB RAM
	*  QDR Infiniband between all GPU nodes

In order to use the GPU cards, you need to allocate them through the queuing system using the</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:important?rev=1738946613&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-02-07T16:43:33+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>important</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:important?rev=1738946613&amp;do=diff</link>
        <description>Important notes on cluster usage

&#039;&#039;/home&#039;&#039; on the cluster

The /home directories on the cluster are separate for each cluster and separate from our regular home directories, so you will need to copy over any configuration you may need, such as SSH keys.

Submit jobs from &#039;&#039;/scratch/username&#039;&#039;</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:interactivesessions?rev=1434465331&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-06-16T14:35:31+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>interactivesessions</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:interactivesessions?rev=1434465331&amp;do=diff</link>
        <description>Interactive sessions with Slurm

You don&#039;t have to run your jobs non-interactively using sbatch. It&#039;s also possible to open an interactive shell through the queuing system just like when using ssh to a node. You may only use ssh to log into a node to check if your sbatch job is running fine (using top for example). You may not start calculations this way, since this bypasses the queuing system and will take resources assigned to other users. When starting an interactive shell through the queuing…</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:localstorage?rev=1383906387&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-11-08T10:26:27+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>localstorage</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:localstorage?rev=1383906387&amp;do=diff</link>
        <description>Use local storage on the compute nodes

If many jobs write to or read from the NFS server for the cluster home at the same time, the server can get very slow and even crash. Therefore it&#039;s very important that all users use the local storage available on the nodes whenever possible. In most cases this will also speed up your jobs. In order to do so, you have to tell the queuing system the amount of local disk space you want to reserve for your job. The queuing system will create a directory named</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:mmacluster?rev=1738849088&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-02-06T13:38:08+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>mmacluster</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:mmacluster?rev=1738849088&amp;do=diff</link>
        <description>Mathematica on a cluster HOWTO

Basics

On the cluster, Mathematica can be started without a GUI from the command line (in interactive mode and from batch files). In this mode, it processes so-called m-files (with the suffix .m). These m-files are normal Mathematica notebooks that have been &#039;saved as&#039; m-files, a format that can be chosen in the menu.</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:nodes?rev=1738849267&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-02-06T13:41:07+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>nodes</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:nodes?rev=1738849267&amp;do=diff</link>
        <description>List of special HPC nodes
&lt;hpc@physik.fu-berlin.de&gt;
The following HPC nodes are currently not managed by Slurm but are rather directly assigned to users for a period of time to solve specific problems.
 Hostname   Type             RAM     CPU        Cores   HT    Disk   OS</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:pythoncluster?rev=1714142362&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-04-26T14:39:22+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>pythoncluster</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:pythoncluster?rev=1714142362&amp;do=diff</link>
        <description>Python on the Cluster

This document describes how to use Python for numerical and/or batch jobs on the cluster, with a heavy focus on using IPython to manage multiprocessing. Other solutions are possible, but they are not the focus of this introductory document. Furthermore, this document focuses on how to set up a parallel IPython environment, but will not discuss its usage.</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:queuing-system?rev=1402497634&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2014-06-11T14:40:34+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>queuing-system</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:queuing-system?rev=1402497634&amp;do=diff</link>
        <description>Introduction to the Torque HPC cluster of the physics department

The login node of the HPC cluster is sheldon.physik.fu-berlin.de. You can connect to it from anywhere using ssh, e.g. by issuing ssh sheldon.physik.fu-berlin.de on the command line or using PuTTY from Windows. … $PBS_O_WORKDIR, this will most likely speed up your compute jobs. At the same time it reduces the load on the central home server. However, do not forget to copy back data from the compute nodes to $</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:slurm?rev=1762957028&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-11-12T14:17:08+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>slurm</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:slurm?rev=1762957028&amp;do=diff</link>
        <description>Introduction to the Slurm HPC cluster

The primary source of documentation on Slurm usage and commands is the Slurm site. Please also consult the man pages for Slurm commands, e.g. typing man sbatch will give you extensive information on the sbatch</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:start?rev=1771513250&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-02-19T15:00:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>start</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:start?rev=1771513250&amp;do=diff</link>
        <description>Information about the HPC-Cluster
Matrix in #hpc:physik.fu-berlin.de
Access to the Cluster

In order to get access to the Department of Physics HPC resources, you need to send an email to &lt;hpc@physik.fu-berlin.de&gt;. Please supply the following information:

	*  Your ZEDAT account username
	*  The group you are using the system for (e.g. AG Netz, AG Eisert, AG Franke…)</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:usetmpforio?rev=1714141175&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-04-26T14:19:35+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>usetmpforio</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:usetmpforio?rev=1714141175&amp;do=diff</link>
        <description>Use /tmp for I/O intensive single node jobs

Jobs that do a lot of I/O operations on a shared cluster filesystem like /scratch can severely slow down the whole system. If your job does not use multiple nodes and is not reading and writing very large files, it might be a good idea to move input and output files to the</description>
    </item>
    <item rdf:about="http://wiki.physik.fu-berlin.de/it/services:cluster:uv1000?rev=1426861216&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-03-20T14:20:16+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>uv1000</title>
        <link>http://wiki.physik.fu-berlin.de/it/services:cluster:uv1000?rev=1426861216&amp;do=diff</link>
        <description>SGI Altix UV 1000

The UV 1000 is a large shared memory machine. Our single machine is still housed in the HLRN where it was used from 2011 to 2013 by the previous owners.

Running Shared Memory Applications on the UV

Although the UV can be used like any other multi-core machine to run shared-memory programs, it does not behave in the same way as a standard office computer. This is a result of its much larger scale and the fundamentally different architecture required to reach these proportions.</description>
    </item>
</rdf:RDF>
