SCC

Scientific Compute Cluster (SCC)
Platform for High Performance Computing (HPC) at the University of Konstanz

Overview

The Scientific Compute Cluster in Konstanz (SCCKN) is a platform for High Performance Computing (HPC) and High Throughput Computing (HTC, "Big Data") at the University of Konstanz. It provides access to computational resources for all groups of the University of Konstanz (see Availability and Participants). The Scientific Compute Cluster (SCC) was built in November 2013 by the Physics Department and extended to an infrastructure platform in September 2014.
SCCKN enables every group of the University of Konstanz to use, and also invest in, local computing resources. The SCC especially provides computing resources that are not covered by federal HPC facilities (such as local data storage, long-running jobs, GPU computing, special software, interactive/GUI programs, and flexible investment).
SCCKN is organized and managed by the Physics Department (Contact), which has more than 10 years of experience in HPC cluster management (hardware and software) and also the largest demand for scientific computing at the University of Konstanz.

Availability

The SCC cluster was set up in Q4 2013 (Phase 1) and was initially available only to supporting groups. Since its extension to the university-wide infrastructure platform SCCKN in Q3 2014 (Phase 5), the cluster has been available to all groups of the University of Konstanz.

Timetable:

  • Sep. 24, 2013: End of public tender (Ausschreibung) for phase 1
  • Phase 0, Nov. 2013: Initial setup (installation and testing)
  • Phase 1, Dec. 4, 2013: Installation in P5N with one frontend server, one storage server and 16 compute nodes; fine tuning
  • Phase 2, Dec. 11, 2013: Added 8 compute nodes
  • Phase 3, Jan.-Mar. 2014: Added 11 compute nodes
  • Phase 4, May 2014: Included all workstations in the SCC cluster queuing
  • Jul. 25, 2014: End of public tender (Ausschreibung) for SCCKN
  • Phase 5, Sep. 2014: Extension to a university-wide infrastructure platform (SCCKN) with additional nodes (4 x 256 GB, 12 x 64 GB, 2 x GPU) and a second storage server
  • Nov. 5, 2014: Kick-off meeting SCCKN (Slides)
  • Phase 6, Dec. 9/10, 2014: Installation in N4 including nodes from old HPC clusters
  • Phase 7, Dec. 2014: Added 12 nodes (with Haswell CPUs)
  • Phase 8, end of 2014: Included the old HPC clusters of the Theoretical Physics department in the SCC queuing system
  • Phase 9, Aug. 20, 2015: Optimized performance and stability of the storage system for /work and /home/scc
  • Phase 10, Aug. 20, 2015: Installed a new backup server for all simulation data; all data spaces will be backed up there over the following weeks
  • Phase 11, Jan. 21, 2016: Added 8 nodes (with 12-core Haswell CPUs)
  • Phase 12, Jan. 22, 2016: Added 8 Sandy Bridge nodes from old clusters
  • Phase 13, Feb. 2, 2016: Added 4 Haswell nodes with 512 GB RAM each
  • Phase 14, Feb. 24, 2016: Added 2 Haswell nodes with two K80 GPUs
  • Phase 15, May 27, 2016: Updated the external network connection to 4x10 Gbit/s
  • Phase 16, Jan. 2017: Software upgrade of all servers and nodes, including software modules
  • Phase 17, Jan. 2017: Added 12 Broadwell nodes with 512 GB RAM each and 4 Broadwell nodes with 128 GB RAM

Summary (as of Phase 15)

  • ca. 61.7 TFLOPS (SCC: 49.0 TFLOPS, Workstations: 8.1 TFLOPS, Old Cluster: 4.6 TFLOPS)
  • 2772 cores, max. 1,996,000 core-h/month (SCC: 2028 cores, 1,460,000 core-h/month)
  • ca. 18.2 TB RAM (SCC: 17.4 TB)
  • 240 TB storage + ca. 200 TB local disk
  • Backup server (256 TB storage) for all simulation data
  • InfiniBand FDR interconnect (56 Gbit/s, 0.7 µs latency) between most nodes
  • 4x10 Gbit/s Ethernet connection to the university LAN
  • openSUSE 12.3 with many additional repositories
  • Modules environment
  • Grid Engine queuing system (see the job-submission sketch after this list)
  • Zabbix monitoring
  • Remote management using IPMI
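
To illustrate how the Modules environment and the Grid Engine queuing system fit together from a user's point of view, here is a minimal sketch of submitting a job from Python. The module name python/3, the h_rt resource request and the payload script my_simulation.py are placeholders and not SCC-specific settings; only the qsub command, the "#$" directive syntax and "module load" are standard parts of Grid Engine and the Modules environment.

    #!/usr/bin/env python3
    # Minimal sketch: build a Grid Engine job script and submit it with qsub.
    # Placeholders (not taken from the SCC documentation): the module name
    # "python/3", the h_rt request, and the payload "my_simulation.py".
    import subprocess
    import textwrap

    job_script = textwrap.dedent("""\
        #!/bin/bash
        # "#$" lines are Grid Engine directives: job name, run in the
        # submission directory, and a requested wall-clock time of one hour.
        #$ -N example_job
        #$ -cwd
        #$ -l h_rt=01:00:00

        # Load software through the Modules environment, then run the workload.
        module load python/3
        python3 my_simulation.py
        """)

    # qsub reads the job script from standard input when no file is given.
    result = subprocess.run(["qsub"], input=job_script, text=True,
                            capture_output=True, check=True)
    print(result.stdout.strip())

Available modules and the state of submitted jobs are typically inspected with "module avail" and "qstat", respectively.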