h1. New computing cluster in Koenigstrasse

h2. Introduction

Since January 2022 we have a new computing cluster, which is installed in the server room of the physics department at Koenigstrasse. A 10 TB disk is temporarily attached to the cluster for processing. We are currently (17th March 2022) waiting for a large amount of storage (40 TB), which will then replace this temporary solution.

h2. Hardware

* there are in total 8 compute nodes available;
* the compute nodes are named "usm-cl-bt01n[1-4]" and "usm-cl-bt02n[1-4]";
* each node has 128 cores;
* each node has 500 GB of memory available;

h2. Login

* public login server: login.physik.uni-muenchen.de (see the SSH example below);
* Jupyterhub: https://workshop.physik.uni-muenchen.de;
* both the login server and the Jupyterhub require two-factor authentication, with your physics account password as the first factor; the second factor is a time-based one-time password from a smartphone app such as Google Authenticator (or any other app that generates time-based one-time passwords). The app needs to be registered at https://otp.physik.uni-muenchen.de, where it is called a soft token.

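A plain terminal login then looks like the sketch below; the user name is only a placeholder for your physics account, and you will be asked for the password and the one-time password:

<pre>
# log in to the public login server; replace the user name with your physics account
ssh your.name@login.physik.uni-muenchen.de
</pre>
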
Graphical access to the cluster or its login nodes is possible; I am currently trying to figure out the most efficient way to do this.

h2. Processing

* as on our local cluster, "slurm" is used as the job scheduling system. Access to the compute nodes and running jobs requires starting a corresponding slurm job;
* the partition of our cluster is "usm-cl";
* from the login node you can start an interactive job via "intjob --partition=usm-cl" (additional slurm arguments are accepted as well);
* I created a "python script":https://cosmofs3.kosmo.physik.uni-muenchen.de/attachments/download/285/scontrol.py which provides information on our partition;
* I have also put together a simple example "slurm script":https://cosmofs3.kosmo.physik.uni-muenchen.de/attachments/download/283/test.slurm; a minimal batch script sketch is shown below.

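The linked script contains the details; as a rough orientation only, a minimal batch script for our partition could look like this (job name, resource numbers and file names are placeholders, not recommendations):

<pre>
#!/bin/bash
#SBATCH --partition=usm-cl        # our partition (see above)
#SBATCH --job-name=example        # placeholder job name
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=4         # cores for this task (each node has 128)
#SBATCH --time=01:00:00           # wall-clock limit
#SBATCH --output=%x-%j.out        # log file named after job name and job id

# print some information on the allocated node
hostname
nproc

# replace this with the actual processing command
srun echo "hello from usm-cl"
</pre>

Such a script is submitted from the login node with "sbatch <script>"; "squeue --partition=usm-cl" shows the jobs currently running or queued on our partition.
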
h2. Disk space

* users can create their own disk space under "/project/ls-mohr/users/", such as "/project/ls-mohr/users/martin.kuemmel" (see the example below);

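Assuming the directory should simply carry your user name, creating it is a single command:

<pre>
# create a personal directory; adjust the name if it differs from your login name
mkdir -p /project/ls-mohr/users/$USER
</pre>
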
h2. Installed software

We use a package manager called "spack" to download and install software that is not directly available from the Linux distribution. To see what is already installed, run the following on a compute node:

* "module load spack"
* "module avail"

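For example, in an interactive session on a compute node (the module name in the last line is a placeholder for whatever package you need):

<pre>
module load spack          # make the spack-installed packages visible as modules
module avail               # list all modules that are currently available
module load <module-name>  # load one of the listed modules
</pre>
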
Adding more software is not a problem.