h1. New computing cluster in Koenigstrasse

h2. Introduction

Since January 2022 we have a new computing cluster, which is installed in the server room of the physics department at Koenigstrasse. A 10 TB disk is temporarily attached to the cluster for processing. We are currently (17th March 2022) waiting for a large amount of storage (40 TB), which will then replace this temporary solution.

h2. Login

* public login server: login.physik.uni-muenchen.de;
* Jupyterhub: https://workshop.physik.uni-muenchen.de;
* both the login server and the Jupyterhub require two-factor authentication, with your physics account password as the first factor. For the second factor you can use a smartphone app such as Google Authenticator (or any other app that generates time-based one-time passwords). The app needs to be registered at https://otp.physik.uni-muenchen.de, where it is called a soft token. An example login is sketched below.
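
A minimal login sketch, assuming access to the public login server is via ssh; "YOUR_ACCOUNT" is only a placeholder for your physics account name:

<pre>
# first prompt: physics account password (first factor)
# second prompt: one-time password from the registered app (second factor)
ssh YOUR_ACCOUNT@login.physik.uni-muenchen.de
</pre>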

Graphical access to the cluster or its login nodes is possible; I am currently trying to figure out the most efficient way to do this.

h2. Processing

* as on our local cluster, "slurm" is used as the job scheduling system; access to the computing nodes and running jobs requires starting a corresponding slurm job;
* the partition of our cluster is "usm-cl";
* from the login node you can start an interactive job via "intjob --partition=usm-cl" (additional slurm arguments are accepted as well); a sketch of a batch job script is shown after this list;
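
For batch processing, here is a minimal job script sketch; the partition is ours ("usm-cl"), while the job name, resource requests, and script name are only placeholders:

<pre>
#!/bin/bash
#SBATCH --partition=usm-cl        # our partition
#SBATCH --job-name=test-job       # placeholder job name
#SBATCH --ntasks=1                # number of tasks
#SBATCH --time=01:00:00           # requested wall time
#SBATCH --output=test-job_%j.log  # stdout/stderr of the job

# the actual processing goes here
hostname
</pre>

Assuming the standard slurm client commands are available on the login node, the script is submitted with "sbatch test-job.sh" and monitored with "squeue -u $USER".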

h2. Disk space

* users can create their own disk space under "/project/ls-mohr/users/", such as "/project/ls-mohr/users/martin.kuemmel"; see the sketch after this list;
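
A short sketch for setting up such a personal directory (assuming "$USER" matches your physics account name, as in the example path above):

<pre>
mkdir -p /project/ls-mohr/users/$USER
</pre>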

h2. Installed software

We use a package manager called spack to download and install software that is not directly available from the Linux distribution. To see what is already installed, do the following on a computing node (an example session is sketched after the list):

* "module load spack"
* "module avail"
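
An example session on a computing node; the package name in the last line is a hypothetical placeholder for whatever "module avail" actually lists:

<pre>
module load spack        # make the spack-provided modules visible
module avail             # list the software modules that are installed
module load somepackage  # hypothetical: load one of the listed modules
</pre>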

Adding more software is not a problem.
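
For reference, a rough sketch of how additional software could be looked up and installed with spack (whether users can run "spack install" themselves or should ask the admins is an assumption to be confirmed; "fftw" is just an example package name):

<pre>
spack list fftw      # search the spack package index
spack info fftw      # show available versions and variants
spack install fftw   # build and install the package
spack find           # list everything spack has installed
</pre>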