Difference: WebHome (62 vs. 63)

Revision 63 - 2014-12-15 - PatrickKemmeren


Welcome to the High-Performance Computing (HPC) wiki

[Thumbnail: HPCflyer rasterized large.png]
 (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)



Thursday, 13th Feb 2014

Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100,000.

Thursday, 6th Feb 2014

The number of slots per queue has been adjusted, as discussed at the HPC Usercouncil meeting. The new settings are:

  • veryshort: 12 slots per node
  • short: 9
  • medium: 7
  • long: 4
  • verylong: 1
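The queue names above suggest a Grid Engine-style batch system. As a minimal sketch of how a job might request slots within these limits — the queue name comes from the list above, but the parallel environment name `threaded` and the script names are assumptions, not the cluster's actual configuration — a job script could look like:

```shell
#!/bin/sh
# Hypothetical Grid Engine job script. The queue name is taken from the
# list above; the parallel environment "threaded" is an assumption.
#$ -q medium          # "medium" queue: at most 7 slots per node
#$ -pe threaded 4     # request 4 slots, within the 7-slot limit
#$ -cwd               # run from the submission directory
./run_analysis.sh     # hypothetical workload
```

Such a script would typically be submitted with `qsub job.sh`; `qconf -sq medium` would show the queue's actual slot configuration.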

Wednesday, 5th Feb 2014

/home and /hpc/local are now served from a different server. This should improve the interactive responsiveness when the cluster is heavily used.

If you notice anything different or unusual, please notify us.


General information

The HPC cluster currently consists of 60 compute nodes (720 cores, 7.5TB working memory) and 260TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
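As a quick sanity check on the per-node capacity implied by these totals (a minimal sketch; all input numbers are taken directly from the paragraph above):

```python
# Per-node figures implied by the cluster totals quoted above.
nodes = 60
cores = 720
memory_tb = 7.5

cores_per_node = cores // nodes                # 720 / 60 = 12 cores per node
memory_gb_per_node = memory_tb * 1000 / nodes  # 7500 / 60 = 125 GB per node

print(cores_per_node, memory_gb_per_node)      # 12 125.0
```

The 12 cores per node is consistent with the veryshort queue's 12 slots per node listed in the news item above.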
