Planned Activities

Update of the software server (sccsoft).

2024-03-02

The frontend server scc was updated.

2024-02-23

CoCalc was updated to the latest version.

2023-12-29

A main server issue was fixed.

2023-08-28

Main server outage and network failures fixed.

2023-02-16

Finished all updates by upgrading the data server sccdata to openSUSE 15.4.

2023-02-11

The frontends scc and scc2 were updated. The JupyterHub and application server VMs were also updated.

2022-04-01

The Archive was not available due to maintenance of the file system.

2022-01/02

The home directories will be transferred to a new server. Affected users will be contacted if necessary.

2021-11-02

The home directories /home/scc, /home/newton and /data/archiv weren't available between 4 pm and 10 pm due to a hardware problem with the corresponding file server. Everything is up and running again.

2021-09

2021-09-22: SCCKN now has a JupyterHub running on scc2. This is a web interface to Jupyter and other applications running on the cluster. The URL is http://scc2.uni-konstanz.de

2021-09-19: The Matlab module configuration was updated to avoid common memory problems. At least 10 GB of memory is required (-l h_vmem=10G).
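The memory request above can also be embedded directly in a grid engine job script. A minimal sketch, assuming the qsub directive syntax implied by the -l h_vmem flag; the module name and script file are illustrative, not taken from the cluster documentation:

```shell
#!/bin/bash
#$ -l h_vmem=10G    # request the recommended 10 GB of memory
#$ -cwd             # run the job from the submission directory
module load matlab  # module name assumed; check `module avail`
matlab -nodisplay -nosplash -r "run('analysis.m'); exit"
```

Submit it with `qsub matlab_job.sh`; alternatively, pass -l h_vmem=10G on the qsub command line.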

2021-09-16: The file server newton failed during a major power outage on 2021-09-14. The home directories were restored from backup and are online since 2021-09-15.

2021-09-12: The data server for /data/scc and the grid engine were updated (the performance issues are now fixed).

2021-09-11: The frontend servers scc and scc2 were updated. Please recompile and check your software. If there are any issues, let me know.

2021-09-10: The software server was updated.

2021-09-01: Jobs will now only run on updated nodes (that means programs compiled on the updated systems will no longer fail due to old glibc versions).

All servers and nodes will be updated to the latest openSUSE version. The software modules will not be updated and should continue to work without changes. If not, let me know.

We also plan to install JupyterLab on the frontend server during the update. This will allow access to Jupyter notebooks on the cluster via a web browser.

2021-03-02

DONE: ShareLaTeX server and software update (now running openSUSE 15.2 and Overleaf 2.5.2)

2021-02-23

DONE: Server updates on watt and newton

2020-06-29

/data/scc is available again. The performance may be somewhat lower for the next few weeks due to necessary rebuild and backup activities.

2020-06-25

/data/scc is currently not available due to a hardware problem. The storage will be restored as soon as possible.

Changes (Oct. 2019 - Apr. 2020):

  • New frontends scc and scc2 are now available with openSUSE 15.1 and new modules (the same modules are now used on all queues). Please check that your software works before the queues are switched to the new software and modules.
  • All servers and nodes updated to openSUSE 15.1
  • Jobs sent to any queue will run on updated nodes. Old jobs will continue to run.
  • We have 8 brand-new HPC nodes (with 40 Cascade Lake CPU cores and 192 GB RAM each) and 5 GPU nodes (with 8 RTX 2080 Ti GPUs each)!
  • The queue gpu now supports parallel environments for CPU cores. GPUs must be requested with "-l gpu=X", where X is the number of needed GPUs.
  • Extended maximum run time on queues "scc" (10 days) and "long" (120 days) and removed queue "longer"
  • Default maximum run time increased to 7 days
  • Windows 10 VMs are supported to run any Windows software natively on the frontends
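The GPU request described above can be combined with a CPU parallel environment in a single submission. A hedged sketch: only the queue name "gpu" and the "-l gpu=X" flag come from the list above; the parallel environment name "smp" and the job script are assumptions to be checked against the cluster's configuration:

```shell
# Request 2 GPUs and 8 CPU cores on the gpu queue
# (PE name "smp" and train.sh are illustrative)
qsub -q gpu -l gpu=2 -pe smp 8 train.sh
```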

Changes (Jan 2017):

  • All software modules were recompiled for openSUSE 42.2. Recompile your own software with the new modules (if necessary). Please let me know if something does not work as before or as expected. The old modules are still available by prepending "old/" (example: "module load old/gsl").
  • There is an additional queue named "old" containing nodes with AMD CPUs or Intel CPUs older than Intel Sandy Bridge. Please check whether your jobs also run in this queue, because not all modules may run on this old hardware. It is expected to be less crowded than the other queues, and there are fewer job limits.
  • The HOME directories on SCC now have a strict quota. Please try to stay within your quota. You can use /data/scc to store more data.