  * [[important|Important notes]] on cluster usage
  * Start with the [[slurm|Introduction to the Slurm HPC cluster]] (a minimal job-script sketch follows below this list).
  * Using [[interactivesessions|interactive sessions]] with the queuing system.
  * How to make use of the [[gpunodes|GPU-nodes]].

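For quick orientation, here is a minimal sketch of a Slurm batch script, assuming the clusters accept standard ''sbatch'' scripts. The job name, resource numbers and time limit are placeholders, not values taken from this page; the [[slurm|Slurm introduction]] describes what applies to our clusters.

<code bash>
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue (placeholder)
#SBATCH --ntasks=1                # a single task / core
#SBATCH --time=00:10:00           # wall-clock limit hh:mm:ss
#SBATCH --mem-per-cpu=1G          # memory per core

srun hostname                     # replace with the actual program call
</code>

Submit the script with ''sbatch job.sh'' and check its state with ''squeue -u $USER''. An interactive shell on a compute node can usually be requested with ''srun --pty bash''; see [[interactivesessions|interactive sessions]] for the details that apply here.
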
===== General documentation =====

  * Robert Hübener from AG-Eisert has written a HOWTO for using [[mmacluster|Mathematica on a HPC-Cluster]].
  * A recent Python version has been built for cluster usage. The [[pythoncluster|Python on the HPC-Cluster]] tutorial describes how to set it up.
  * Try to [[usetmpforio|use /tmp for I/O intensive single node jobs]] (a small staging sketch is shown below).

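A hedged sketch of how such staging to node-local /tmp could look in a single-node batch job. All paths, the program name and the cleanup step are made-up placeholders; the linked page above describes the recommended procedure.

<code bash>
#!/bin/bash
#SBATCH --job-name=tmp-io         # single-node job using node-local /tmp (placeholder name)
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# create a private scratch directory on the node-local /tmp
SCRATCH=$(mktemp -d /tmp/${USER}.${SLURM_JOB_ID}.XXXXXX)

# stage input in, work locally, stage results back (paths and program are placeholders)
cp "$HOME/project/input.dat" "$SCRATCH/"
cd "$SCRATCH"
"$HOME/project/my_io_heavy_program" input.dat > output.dat
cp output.dat "$HOME/project/"

# clean up so /tmp on the node does not fill up
rm -rf "$SCRATCH"
</code>
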
===== Overview of available resources =====

The following table lists all HPC resources available at the physics department. At the end of the table we also list the resources of the ZEDAT [[http://www.zedat.fu-berlin.de/HPC/Soroban|soroban]] cluster. The tron cluster at Takustrasse 9 is currently being restructured and will most likely start running Debian/Jessie.

^ Hosts ^ Manager ^ Nodes ^ Form ^ Hardware ^ CPU ^ Speed ^ Core/Node ^ RAM/Core ^ RAM/Node ^ #RAM ^ #Cores ^
| @#cfc:**tron cluster** - FB Physik - Location: Takustrasse 9 ||||||||||||
| @#cfc:n010-n041 | offline | 32 | 2U Twin<sup>2</sup> | Dell C6100 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 96G | 3072G | 384 |
| @#cfc:n110-n111 | offline | 2 | 2U | Dell C6145 | 4x Opteron 6128HE | 2.0GHz | 32 | 4G | 128G | 256G | 64 |
| @#cfc:**#Taku9** | | **~~=sum(range(col(),1,col(),row()-1))~~** | | | | | | | | **~~=sum(range(col(),1,col(),row()-1))~~G** | **~~=sum(range(col(),1,col(),row()-1))~~** |
| | | | | | | | | | | | |
| @#ced:**sheldon cluster** - FB Physik - Location: HLRN ||||||||||||
| @#ced:x001-x192 | SLURM<sup>1)</sup> | 192 | Blade | SGI Altix ICE 8200 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 9216G | 1536 |
| @#ced:uv1000 | none | 1 | 42U | SGI UV 1000 | 64x Xeon X7560 | 2.26GHz | 512 | 4G | 2T | 2048G | 512 |
| @#ced:**#HLRN** | | **193** | | | | | | | | **11264G** | **2048** |
| | | | | | | | | | | | |
| @#cef:**yoshi cluster** - FB Physik - Location: ZEDAT ||||||||||||
| @#cef:y001-y128 | SLURM<sup>1)</sup> | 128 | Blade | HP BL460c G6 | 2x Xeon X5570 | 2.93GHz | 8 | 6G | 48G | 6144G | 1024 |
| @#cef:ygpu01-ygpu31 | SLURM<sup>2)</sup> | 31 | 2U GPU nodes (2x Nvidia Tesla M2070) | IBM iDataPlex dx360 M3 | 2x Xeon X5570 | 2.93GHz | 8 | 3G | 24G | 744G | 248 |
| @#cef:**#Ph-ZEDAT** | | **159** | | | | | | | | **6888G** | **1272** |
| | | | | | | | | | | | |
| @#ccf:**soroban cluster** - ZEDAT-HPC - Location: ZEDAT ||||||||||||
| @#ccf:node001-002 | SLURM | 2 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 8G | 48G | 96G | 24 |
| @#ccf:node003-030 | SLURM | 28 | 1U Twin | Asus Z8NH-D12 | 2x Xeon X5650 | 2.66GHz | 12 | 4G | 24G | 672G | 336 |
| @#ccc:**decommissioned systems** ||||||||||||
| @#ccc:Abacus4 | | 8 | | IBM p575 | 16x POWER 5+ | 1.9GHz | 32 | 4G | 128G | 1024G | 256 |