HPC: managed clusters

From ICT science

The Science Faculty has several High Performance Computing (HPC) facilities available.

Thor

From:

  • 2014-12-03

Main Users:

  • Debye Institute - Soft Condensed Matter (Marjolein Dijkstra)

Location:

Resources:

  • 1 master/storage node (24 cores)
  • 9 compute nodes (9 × 48 = 432 cores) remaining
  • 2.66 GB/core
  • 11 TB storage

OS:

  • CentOS 6.6 (compute nodes: 6.5)

Clustermanager:

  • Rocks 6.1.1

Queue-manager:


Nano

From:

  • 2012-12-14 [head, 01..20]; 2014-12-16 [21..28]

Main Users:

  • Debye Institute - Soft Condensed Matter (Marijn van Huis)

Location:

Resources:

  • 1 master/storage node (8 cores)
  • 28 compute nodes (28 × 16 = 448 cores; 20 × 32 GB (2 GB/core) + 8 × 64 GB (4 GB/core) = 1152 GB)
  • 72 TB

OS:

  • Scientific Linux 6.5

ClusterManager:

Queues:

  • all.q, Q2, Q4, stud.q
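
The `.q` naming above follows Grid Engine conventions, though the Queue-manager field on this page is left blank, so treat the scheduler as an assumption. A minimal job script targeting one of the listed queues could look like the following sketch; the job name and wall-time limit are hypothetical:

```shell
#!/bin/bash
# Minimal Grid Engine job script (assumed scheduler; queue name taken from the list above)
#$ -N example_job      # job name (hypothetical)
#$ -q all.q            # target queue
#$ -cwd                # run in the submission directory
#$ -l h_rt=01:00:00    # wall-time limit of one hour (hypothetical)

echo "Running on $(hostname)"
```

Such a script would be submitted with `qsub example_job.sh`, and its state inspected with `qstat`.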

Queue-manager:


Hera

From:

  • 2015-12-17 [Head, 01..20] up to 2020-11-20 [89..92]

Main Users:

  • Nanophotonics section of the Debye Institute

Location:

Resources:

  • 1 master/storage node (8 cores; 32 GB; 37 TB)
  • 36 compute nodes (16 cores; 64 GB; 1 TB) [A]
  • 24 compute nodes (20 cores; 64 GB; 1 TB) [B]
  • 16 compute nodes (24 cores; 96 GB; 480 GB SSD) [C]
  • 8 compute nodes (32 cores; 96 GB; 480 GB SSD) [D]
  • 4 compute nodes (32 cores; 96 GB; 480 GB SSD) [E]
  • (= 1824 cores; 3.5 GB/core)
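
The aggregate in the last bullet can be re-derived from the per-group figures above; a quick sanity check in Python (group labels as in the list):

```python
# Per-group compute-node figures for Hera: (node count, cores per node, GB RAM per node)
groups = {
    "A": (36, 16, 64),
    "B": (24, 20, 64),
    "C": (16, 24, 96),
    "D": (8, 32, 96),
    "E": (4, 32, 96),
}
total_cores = sum(n * c for n, c, _ in groups.values())  # compute cores, head node excluded
total_ram = sum(n * r for n, _, r in groups.values())    # total RAM in GB
print(total_cores, round(total_ram / total_cores, 2))    # → 1824 3.58
```

So the listed 3.5 GB/core is the exact ratio (≈3.58) rounded down.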

OS:

  • CentOS 7.2 (0, A, B), 7.6 (C, D), 7.9 (E)

ClusterManager:

Queues:

  • all.q
  • lammps.q

Queue-manager:


Obi1

From:

  • 2014-02-14

Main Users:

  • Debye Institute - Computational Chemistry

Co-Administration:

  • Jaap Louwen, Ellen Sterk

Location:

Resources:

  • 1 master/storage node (8 cores; 64 GB; 15 TB)
  • 10 compute nodes (10 × 16 = 160 cores; 320 GB = 2 GB/core)

OS:

  • Scientific Linux 6.9

Clustermanager:

Queues:

  • all.q, long.q, small.q, test.q

Queue-manager:


Gemini

From:

  • 2016-11-21 {35,36}
  • 2019-11-30 {37,38}
  • 2020-12-01 {39}
  • 2022-01-27 {40,41,42}

Main Users:

  • All staff and students of the Science Faculty (plus some external users)

Location:

Resources:

  • Head node (256 GB; 64 HT cores; 2 TB scratch)
  • 1 node (256 GB; 32 cores; 8 GB/core; 2 TB scratch)
  • 2 nodes (192 GB; 48 cores; 4 GB/core; 2 TB scratch)
  • 1 node (256 GB; 48 cores; 5 GB/core; 12 TB scratch)
  • 3 nodes (256 GB; 48 cores; 5 GB/core; 2 TB SSD scratch)

OS:

  • Scientific Linux 7.9

Queue-manager:

Queues:

  • all.q, long.q, test.q, itf.q

More info:


Angstrom

From:

  • 2017-12-22

Main Users:

  • Biomolecular Sciences - Cryo-EM

Location:

Resources:

  • 1 master node
  • 10 regular compute nodes (20 cores; 128 GB RAM; 250 GB disk each)
  • 3 GPU nodes (72 cores & 57,344 CUDA cores)
  • 2 storage nodes (520 TB)
  • 1 backup server (750 TB)

OS:

  • CentOS 7.4

Clustermanager:

Queue-manager:

More info:


Pico

From:

  • 2020-01-13

Main Users:

  • Debye Institute - Soft Condensed Matter (Marijn van Huis)

Location:

Resources:

  • 1 master/storage node (40 cores; 192 GB RAM; 100 TB disk)
  • 12 nodes (40 cores; 96 GB RAM, 1 TB disk)

OS:

  • CentOS 7.7

Clustermanager:

Queue-manager:

More info:


Helix

From:

  • 2020-01-14

Main Users:

  • Biomolecular Sciences - Cryo-EM

Location:

Resources:

  • 1 master node
  • 3 GPU nodes (72 cores & 57,344 CUDA cores)
  • 2 Visualisation Servers
  • 1 storage node (520 TB)
  • 1 backup server (750 TB)

OS:

  • CentOS 7.7

Clustermanager:

Queue-manager:

More info:


Lorenz

From:

  • 2021-08-24

Main Users:

  • IMAU

Location:

Resources:

  • 1 master/storage node (40 cores; 192 GB RAM; 2* 120 TB disk)
  • 6 nodes (32 cores; 96 GB RAM, 480 G OS, 4 TB scratch)

OS:

  • AlmaLinux 8.5

Clustermanager:

Queue-manager:


Odin

From:

  • 2022-03-11

Main Users:

  • Debye Institute for Nanomaterials Science - Laura Filion

Location:

Resources:

  • 1 master/storage node (40 cores; 192 GB RAM; 2* 120 TB disk)
  • 6 nodes (32 cores; 96 GB RAM, 1 TB disk)

OS:

  • AlmaLinux 8.5

Clustermanager:

Queue-manager:


Aenetone

From:

  • 2022-03-04

Main Users:

  • Debye Institute - Nong Artrith

Location:

Resources:

  • 1 master/storage node (128 cores (MT); 256 GB RAM; 120 TB disk)
  • 11 nodes (64 cores; 256 GB RAM, 1 TB disk)
  • 2 nodes (64 cores; 512 GB RAM, 1 TB disk)

OS:

  • AlmaLinux 8.5

Clustermanager:

Queue-manager:


Laura

From:

  • 2012-06-15

Main Users:

  • CS Department (Geert-Jan Giezeman)

Location:

Resources:

  • 1 node (32 cores; 64 GB; 2 GB/core)

OS:

  • Scientific Linux 7.9

Queue-manager:

  • none

Markov

(= clue.science.uu.nl)

From:

  • 2014-12-11

Main Users:

  • IMAU

Location:

Resources:

  • 1 compute node
  • 48 cores; 64 GB (≈ 1.33 GB/core)
  • linked to: NetApp storage

OS:

  • Scientific Linux 7.9

ClusterManager:

  • none

Queue-manager:

More info: