The SCC cluster was set up in Q4 2013 (Phase 1) and was initially available only to the supporting groups. After being extended to the university-wide infrastructure platform SCCKN in Q3 2014 (Phase 5), the cluster is now available to all groups of the University of Konstanz.

Timetable:

     
  Sep. 24, 2013 End of public tender (Ausschreibung) for Phase 1
Phase 0 Nov. 2013 Initial setup (installation and testing)
Phase 1 Dec. 4, 2013 Installation in P5N with one frontend server, one storage server, and 16 compute nodes; fine-tuning
Phase 2 Dec. 11, 2013 Added 8 compute nodes
Phase 3 Jan.-Mar. 2014 Added 11 compute nodes
Phase 4 May 2014 Included all workstations in the SCC cluster queuing system
  Jul. 25, 2014 End of public tender (Ausschreibung) for SCCKN
Phase 5 Sep. 2014 Extension to a university-wide infrastructure platform (SCCKN) with additional nodes (4 x 256 GB, 12 x 64 GB, 2 x GPU) and a second storage server
  Nov. 5, 2014 Kick-Off Meeting SCCKN (Slides)
Phase 6 Dec. 9/10, 2014 Installation in N4 including nodes from old HPC clusters
Phase 7 Dec. 2014 Added 12 nodes (with Haswell CPUs)
Phase 8 End of 2014 Included the old HPC clusters of the Theoretical Physics department in the SCC queuing system
Phase 9 Aug. 20, 2015 Optimized performance and stability of the storage system of /work and /home/scc.
Phase 10 Aug. 20, 2015 Installed a new backup server for all simulation data. All data spaces will be backed up here over the following weeks.
Phase 11 Jan. 21, 2016 Added 8 nodes (with 12-core Haswell CPUs)
Phase 12 Jan. 22, 2016 Added 8 Sandy Bridge nodes from old clusters
Phase 13 Feb. 2, 2016 Added 4 Haswell nodes with 512 GB RAM each
Phase 14 Feb. 24, 2016 Added 2 Haswell nodes with two K80 GPUs
Phase 15 May 27, 2016 Updated external network connection to 4x10 Gbit/s
Phase 16 Jan. 2017 Software upgrade of all servers and nodes, including software modules (see here for details)
Phase 17 Feb. 08, 2017 Added 12 Broadwell nodes with 512 GB RAM each and 4 Broadwell nodes with 128 GB RAM
Phase 18 Feb. 20, 2017 Added nodes from the older clusters HPC and HYDRA and collected them in the queue "old"
Phase 19 June 2017 Extended backup server for archiving data
Phase 20 July 2017 Launched our new website
Phase 21 Dec. 2017 Added the latest Intel Parallel Studio XE Cluster Edition
Phase 22 Mar. 2018 Added a new storage server and extended the archive to increase capacity
Phase 23 Mar. 2018 Added 28 new Skylake SP nodes (each 2x16 Cores, 192 GB RAM)
Phase 24 Jun. 2018 Added Xeon Phi node (S7210, 192 GB, 64 Cores)
Phase 25 Nov. 2018 Added a new GPU node with 2x Tesla V100
Phase 26 Jan. 2019 Added 2x Tesla V100 GPUs
Phase 27 Oct. 2019 Added new frontend scc2 (2x20 Cores, 768 GB RAM)
Phase 28 Nov. 2019 Added 8 HPC nodes (2x Intel Gold 6248, 40 cores each) and 5 GPU nodes (AMD EPYC, 8x RTX 2080 Ti each); updated installation and modules to openSUSE 15.1.
Phase 29 Sep. 2021 Added 4 HPC nodes (each with 2x 64-core AMD CPUs); updated installation to openSUSE 15.3; installed JupyterHub.
Phase 30 Feb. 2023 Added 8 HPC nodes (each with 2x 64-core AMD CPUs); updated installation to openSUSE 15.4; updated JupyterHub and installed CoCalc.
Phase 31 Nov. 2023 Added two new storage servers (sccdata4 and sccback2) with 640 TB storage and 1024 GB RAM each.