
Revision 61 - 2014-06-12 - PatrickKemmeren


Welcome to the High-Performance Computing (HPC) wiki


General information

The HPC cluster currently consists of 56 compute nodes (672 cores, 7 TB working memory) and 160 TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, seventeen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented per terabyte per year. For testing purposes or trying out the HPC infrastructure, free trial accounts with (limited) access to the HPC resources can also be arranged; contact us if you are interested.