Version 2 (Martin Kuemmel, 03/17/2022 01:32 PM) → Version 3/23 (Martin Kuemmel, 03/17/2022 01:56 PM)
h1. New computing cluster in Koenigstrasse
h2. Introduction
Since January 2022 we have had a new computing cluster, which is installed in the server room of the physics department at Koenigstrasse. Temporarily attached to the cluster is a 10 TB disk for processing. We are currently (17th March 2022) waiting for a large amount of storage (40 TB), which will then replace this temporary solution.
h2. Hardware
* there are in total 8 compute nodes available;
* the compute nodes are named "usm-cl-bt01n[1-4]" and "usm-cl-bt02n[1-4]";
* each node has 128 cores;
* each node has 500 GB of memory available;
h2. Login
* public login server: login.physik.uni-muenchen.de;
* Jupyterhub: https://workshop.physik.uni-muenchen.de;
* both the login server and the Jupyterhub require two-factor authentication, with your physics account password as the first factor. For the second factor you can use a smartphone app such as Google Authenticator (or any other app that generates time-based one-time passwords). The app needs to be registered at https://otp.physik.uni-muenchen.de, where it is called a soft token.
Graphical access to the cluster and its login nodes is possible; I am currently trying to figure out the most efficient way to set this up.
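A terminal login can look like this (a sketch; replace the example user name with your own physics account):

```shell
# Connect to the public login server (hostname from this page).
# You are first prompted for your physics account password, then for
# the one-time password from your registered authenticator app
# (the "soft token").
ssh first.last@login.physik.uni-muenchen.de
```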
h2. Processing
* as on our local cluster, "slurm" is used as the job scheduling system. Access to the compute nodes and running jobs requires starting a corresponding slurm job;
* the partition of our cluster is "usm-cl";
* from the login node you can start an interactive job via "intjob --partition=usm-cl" (additional slurm arguments are accepted as well);
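Besides interactive jobs, ordinary batch jobs can be submitted with "sbatch". A minimal sketch of a batch script for our partition (the job name and resource numbers are made-up example values, not site defaults):

```shell
#!/bin/bash
#SBATCH --partition=usm-cl      # our partition (from this page)
#SBATCH --job-name=example      # made-up job name
#SBATCH --ntasks=1              # one task ...
#SBATCH --cpus-per-task=4      # ... with 4 cores (example value)
#SBATCH --time=01:00:00         # wall-clock limit of 1 hour (example value)

# the actual work: report which compute node the job landed on
hostname
```

Submit the script with "sbatch myjob.sh"; slurm then writes the job output to a file named like "slurm-<jobid>.out" in the submission directory.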
h2. Disk space
* users can create their own disk space under "/project/ls-mohr/users/" such as "/project/ls-mohr/users/martin.kuemmel";
h2. Installed software
We use the package manager "spack" to download and install software that is not directly available from the Linux distribution. To see what is already installed, do the following on a compute node:
* "module load spack"
* "module avail"
Adding more software is not a problem.
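In a shell on a compute node, the two steps above look like this ("spack find" is spack's standard command for listing its installed packages):

```shell
# make the spack module available in the current shell
module load spack
# list all modules that can now be loaded
module avail
# alternatively, ask spack itself what it has installed
spack find
```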