Difference: WebHome (57 vs. 58)

Revision 58 - 2014-04-03 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

(Image: HPCflyer rasterized large.png)
Line: 35 to 35
  If you notice anything different or unusual, please notify us.
Deleted:
<
<
Wednesday, 8th Jan 2014

The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB).
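
For batch jobs this limit applies per slot. A minimal sketch of requesting memory for a single-slot job, assuming the cluster runs (Sun/Open) Grid Engine and enforces the limit through the h_vmem resource (both are assumptions, and the job script name myjob.sh is a placeholder; check the cluster documentation for the exact resource name):

    # request 15GB of memory for a single-slot batch job
    qsub -l h_vmem=15G myjob.sh

    # or, equivalently, as a directive inside the job script itself
    #$ -l h_vmem=15G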

Thursday, 17th Oct 2013

Yesterday evening, we performed some maintenance.

Among other things, the network connection of the first submit host (hpcs01.op.umcutrecht.nl) was upgraded. It is now the same as that of its sister (hpcs02): two gigabit/s, both to the storage and to the rest of the network. The machine has a new IP address as well: 143.121.195.5; your SSH client may notice this change.
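
If your SSH client now warns about a changed host key or address, the stale entry can be removed from your known_hosts file and the new key accepted on the next login. A minimal sketch, assuming an OpenSSH client on your own machine (the user name below is a placeholder):

    # remove the old entries for the submit host (by name and by new IP address)
    ssh-keygen -R hpcs01.op.umcutrecht.nl
    ssh-keygen -R 143.121.195.5

    # reconnect and accept the new host key when prompted
    ssh your_username@hpcs01.op.umcutrecht.nl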

A somewhat older change: on both submit hosts, the memory limits for interactive work have been relaxed; you can now use 10GB of RAM plus 2GB of swap space.
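
To check which limits your interactive shell on a submit host actually gets, the standard shell built-ins can be used. A quick sketch, assuming the limits are visible via ulimit (how exactly they are enforced may differ):

    # show all resource limits for the current shell
    ulimit -a

    # show only the virtual memory limit (reported in kilobytes)
    ulimit -v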

Thursday, 30th May 2013

To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here).

 

General information

Changed:
<
<
The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, fourteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

>
>
The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, fifteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per-terabyte, per-year basis. For testing or trying out the HPC infrastructure, free trial accounts with (limited) access to the HPC resources can also be arranged; contact us if you are interested.

Contact details

 