Access

It is possible to run jobs from two submit hosts (also known as access hosts):

labsrv7.math.unipd.it

labsrv8.math.unipd.it

You can connect to these hosts via ssh from any host internal to the Department of Mathematics. To connect from the general internet you first have to open an ssh connection to riemann.math.unipd.it, labta.math.unipd.it, or guestportal.math.unipd.it (depending on your status: faculty member, student, or guest) and then connect to the submit hosts from there.
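As a minimal sketch (the username `user` is a placeholder for your own Department account), the two-step connection can be done in a single command with OpenSSH's jump-host option:

```shell
# From inside the Department network, connect directly:
ssh user@labsrv7.math.unipd.it

# From the general internet, jump through the gateway appropriate
# to your status (riemann, labta, or guestportal) with -J:
ssh -J user@riemann.math.unipd.it user@labsrv7.math.unipd.it
```

The `-J` (ProxyJump) option requires OpenSSH 7.3 or later; on older clients you can achieve the same with `ssh -o ProxyCommand='ssh -W %h:%p user@riemann.math.unipd.it' user@labsrv7.math.unipd.it`.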

Computing resources

The cluster is made up of 28 computing nodes. Some nodes are equipped with a CUDA card. Note the 'Labels' column: it declares the "Features" (in SLURM terms) that can be used to select nodes with the '--constraint=LABEL' switch (see the examples). This is the hardware list:

Node | CPU | GPU | RAM | #Cores | Labels (aka Features) | Connectivity | LocalStorage
hpblade01 | 4 x Eight-Core Intel(R) Xeon(R) CPU E5-4640 0 @ 2.40GHz | none | 256GB | 32 | hpblade01, matlab | Ethernet 1GB | 200GB
hpblade04 | 2 x Intel(R) Xeon(R) CPU E5520 @ 2.27GHz | none | 32GB | 8 | hpblade04, matlab | Ethernet 1GB | 50GB
hpblade05 | 2 x Intel(R) Xeon(R) CPU E5520 @ 2.27GHz | none | 32GB | 8 | hpblade05 | Ethernet 1GB | 50GB
hpblade06 | 2 x Intel(R) Xeon(R) CPU E5520 @ 2.27GHz | none | 32GB | 8 | hpblade06 | Ethernet 1GB | 50GB
hpblade07 | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | none | 64GB | 12 | hpblade07, matlab | Ethernet 1GB | 50GB
hpblade08 | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | none | 96GB | 12 | hpblade08, matlab | Ethernet 1GB | 50GB
hpblade12 | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | none | 32GB | 8 | hpblade12, matlab | Ethernet 1GB | 50GB
hpblade13 | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | none | 32GB | 8 | hpblade13, matlab | Ethernet 1GB | 50GB
hpblade16 | 2 x Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz | none | 256GB | 16 | hpblade16 | Ethernet 1GB | 80GB
gpu03 (inactive) | 2 x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz | 1 x NVidia K20 | 128GB | 16 | gpu03, K20, kepler | Ethernet 1GB | 500GB
gpu04 | 2 x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz | 1 x NVidia K20 | 128GB | 16 | gpu04, K20, kepler | Ethernet 1GB | 500GB
dellcuda0 | 2 x Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz | 1 x Nvidia V100 | 192GB | 16 | dellcuda0, matlab, V100, cudadrv495, volta | Ethernet 10GB | 200GB
dellcuda1 | 2 x Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz | 1 x Nvidia A100 | 192GB | 16 | dellcuda1, matlab, A100, cudadrv495, ampere | Ethernet 10GB | 200GB
dellcuda2 | 2 x AMD EPYC 7301 16-Core | 1 x Nvidia V100 | 256GB | 32 | dellcuda2, V100, cudadrv495, volta | Ethernet 10GB | 500GB
dellsrv0 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | none | 160GB | 20 | dellsrv0, matlab | Ethernet 10GB | 200GB
dellsrv1 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 1 x Nvidia V100 | 160GB | 20 | dellsrv1, matlab, V100, cudadrv470, volta | Ethernet 10GB | 200GB
dellsrv2 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 1 x Nvidia T4 | 160GB | 20 | dellsrv2, matlab, T4, cudadrv495, turing | Ethernet 10GB | 200GB
dellsrv3 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 1 x Nvidia T4 | 160GB | 20 | dellsrv3, matlab, T4, cudadrv495, turing | Ethernet 10GB | 200GB
dellsrv4 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 1 x Nvidia T4 | 160GB | 20 | dellsrv4, matlab, T4, cudadrv510, turing | Ethernet 10GB | 200GB
dellsrv5 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 1 x Nvidia T4 | 160GB | 20 | dellsrv5, matlab, T4, cudadrv510, turing | Ethernet 10GB | 200GB
dellsrv6 | 2 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 1 x Nvidia T4 | 160GB | 20 | dellsrv6, matlab, T4, cudadrv510, turing | Ethernet 10GB | 200GB
vgpu0-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu0-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu1-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu1-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu2-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu2-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu3-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu3-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu4-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu4-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu5-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu5-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu0-1 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu0-1, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu1-1 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu1-1, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu2-1 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu2-1, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu3-1 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu3-1, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu4-1 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu4-1, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu5-1 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 60GB | 4 | vgpu5-1, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu6-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A6000 | 120GB | 6 | vgpu6-0, A6000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu7-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A6000 | 120GB | 6 | vgpu7-0, A6000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu8-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A5000 | 120GB | 6 | vgpu8-0, A5000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu9-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A6000 | 120GB | 6 | vgpu9-0, A6000, cudadrv510, ampere | Ethernet 1GB | 20GB
vgpu10-0 | 1 x Intel(R) Xeon(R) | 1 x Nvidia RTX A6000 | 120GB | 6 | vgpu10-0, A6000, cudadrv510, ampere | Ethernet 1GB | 20GB

There are 392 CPU cores and 26 GPUs in total. As noted above, the 'LocalStorage' column of the table describes the amount of disk space available locally on every node for temporary storage of intermediate computation results.
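As an illustrative sketch (the job name, CPU count, and program path are hypothetical placeholders), a batch job can be pinned to nodes carrying a given feature with the '--constraint' switch, for example to request a node with an A5000 card:

```shell
#!/bin/bash
#SBATCH --job-name=myjob          # hypothetical job name
#SBATCH --cpus-per-task=4         # adjust to your needs
#SBATCH --constraint=A5000        # run only on nodes labelled A5000

./my_program                      # placeholder for your executable
```

Submit the script with `sbatch myjob.sh`. The features currently advertised by each node can be listed with `sinfo -o "%N %f"`.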

Storage

As stated before, every node has on average 50 GB of local disk space; additional storage can be accessed over the network.
The table below describes the available storage units, together with the 'mount directory' to use for access:

Generic Name | Size (TB) | Availability | Mount directory | Connection
Home | 35 | All users | /home | Ethernet 10GB
Storage | 34 | All users (on request) | /storage | Ethernet 10GB