Difference: WebHome (1 vs. 85)

Revision 85 (2021-11-18) - ReneJanssen

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 24 to 24
 The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport.
Deleted:
<
<

Price list

1 CPU share (€1200): ~50,000 CPU hrs
1 GPU share (€1200): ~5,000 GPU hrs (includes 6 CPUs)
1 TB non-redundant high-performance storage (€180/TB/year)
1 TB non-redundant low-performance/archive storage (€45/TB/year)
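As a rough worked example based on the numbers above (assuming a share is fully used): one CPU share comes to €1200 / 50,000 hrs ≈ €0.024 per CPU hour, and one GPU share to €1200 / 5,000 hrs ≈ €0.24 per GPU hour.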

 

Contact details

The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here.

Revision 84 (2021-11-17) - ReneJanssen

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 24 to 24
 The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport.
Added:
>
>

Price list

1 CPU share (€1200): ~50,000 CPU hrs
1 GPU share (€1200): ~5,000 GPU hrs (includes 6 CPUs)
1 TB non-redundant high-performance storage (€180/TB/year)
1 TB non-redundant low-performance/archive storage (€45/TB/year)

 

Contact details

The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here.

Revision 83 (2020-03-03) - ReneJanssen

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 49 to 49
A general overview of the HPC cluster is provided here (password required):

No permission to view HPC

Added:
>
>
 
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1397466390" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2775753" user="patrick" version="4"
META FILEATTACHMENT attachment="UBC_logo_RGB_400x178.png" attr="" comment="" date="1452268952" name="UBC_logo_RGB_400x178.png" path="UBC_logo_RGB_400x178.png" size="12435" user="patrick" version="1"
META FILEATTACHMENT attachment="HPC_Flyer.png" attr="" comment="" date="1452270064" name="HPC_Flyer.png" path="HPC_Flyer.png" size="510066" user="patrick" version="1"
Added:
>
>
META FILEATTACHMENT attachment="HPC_user_counsil_20200303.pptx" attr="" comment="" date="1583244150" name="HPC_user_counsil_20200303.pptx" path="HPC_user_counsil_20200303.pptx" size="1086591" user="rjanssen" version="1"

Revision 82 (2019-11-12) - MartinMarinus

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Changed:
<
<
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Trash/Bioinformatics_old Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
>
>
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht/Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

Revision 81 (2017-06-30) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Changed:
<
<
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics_old Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
>
>
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Trash/Bioinformatics_old Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

Revision 80 (2017-02-07) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Changed:
<
<
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
>
>
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics_old Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

Revision 79 (2016-08-03) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 13 to 13
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-eight research groups are actively using the HPC cluster.
>
>
Participating groups. Currently, twenty-nine research groups are actively using the HPC cluster.
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Revision 78 (2016-07-11) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 13 to 13
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-seven research groups are actively using the HPC cluster.
>
>
Participating groups. Currently, twenty-eight research groups are actively using the HPC cluster.
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Revision 77 (2016-05-18) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 13 to 13
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-six research groups are actively using the HPC cluster.
>
>
Participating groups. Currently, twenty-seven research groups are actively using the HPC cluster.
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Revision 76 (2016-03-10) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 9 to 9
 

General information

Changed:
<
<
The HPC facility consists of 1544 cores, 10TB working memory and 600TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
>
>
The HPC facility consists of 1544 cores and 600TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
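To illustrate the batch-wise workflow, a minimal job script might look like the sketch below. This assumes the Grid Engine-style qsub interface that the queue and slot terminology on this page suggests; the queue name, memory value, file names and command are placeholders, not the cluster's actual configuration.

    #!/bin/bash
    #$ -q short                   # target queue (placeholder name)
    #$ -l h_vmem=4G               # per-slot memory request (placeholder value)
    #$ -cwd                       # run in the directory the job was submitted from
    #$ -o job.out -e job.err      # files for standard output and standard error
    my_analysis --input data.txt  # hypothetical command to run on a compute node

Such a script would be submitted from a head node with qsub myjob.sh; the scheduler then places it on a free compute node.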

Revision 75 (2016-02-23) - ReneJanssen

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Changed:
<
<
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
>
>
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

General information

Changed:
<
<
The HPC facility consists of 1200 cores, 10TB working memory and 600TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
>
>
The HPC facility consists of 1544 cores, 10TB working memory and 600TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).

Revision 74 (2016-02-23) - ReneJanssen

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 28 to 28
  The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here.
Deleted:
<
<

HPC infrastructure

A general overview of the HPC cluster is provided here (password required):

No permission to view HPC

 

First-time users

To get you started, some initial information is provided here (password required):

Line: 49 to 44
 

No permission to view HPC

Added:
>
>

HPC infrastructure

A general overview of the HPC cluster is provided here (password required):

No permission to view HPC

 
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1397466390" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2775753" user="patrick" version="4"
META FILEATTACHMENT attachment="UBC_logo_RGB_400x178.png" attr="" comment="" date="1452268952" name="UBC_logo_RGB_400x178.png" path="UBC_logo_RGB_400x178.png" size="12435" user="patrick" version="1"
META FILEATTACHMENT attachment="HPC_Flyer.png" attr="" comment="" date="1452270064" name="HPC_Flyer.png" path="HPC_Flyer.png" size="510066" user="patrick" version="1"

Revision 73 (2016-02-16) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 9 to 9
 

General information

Changed:
<
<
The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
>
>
The HPC facility consists of 1200 cores, 10TB working memory and 600TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).

Revision 72 (2016-02-16) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Changed:
<
<
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht and the Hubrecht Institute.
>
>
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology.
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

General information

Changed:
<
<
The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
>
>
The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-four research groups are actively using the HPC cluster.
>
>
Participating groups. Currently, twenty-six research groups are actively using the HPC cluster.
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Revision 71 (2016-02-02) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 13 to 13
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-three research groups are actively using the HPC cluster.
>
>
Participating groups. Currently, twenty-four research groups are actively using the HPC cluster.
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Revision 70 (2016-01-29) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 13 to 13
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-two research groups are actively using the HPC cluster.
>
>
Participating groups. Currently, twenty-three research groups are actively using the HPC cluster.
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Revision 69 (2016-01-14) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) Facility wiki

HPC Flyer.png
Line: 13 to 13
  Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
Changed:
<
<
Participating groups. Currently, twenty-two research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

>
>
Participating groups. Currently, twenty-two research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

 How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Conditions and Support

Line: 24 to 26
 

Contact details

Changed:
<
<
The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here.
>
>
The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here.
 

HPC infrastructure

Revision 68 (2016-01-08) - PatrickKemmeren

Line: 1 to 1
Changed:
<
<

Welcome to the High-Performance Computing (HPC) wiki

>
>

Welcome to the High-Performance Computing (HPC) Facility wiki

 
Changed:
<
<
HPCflyer rasterized large.png
>
>
HPC Flyer.png
 
Changed:
<
<
The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.

The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. Take your time and have a look around.

The second part of the wiki contains useful practical information for users of the HPC cluster.

>
>
The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht and the Hubrecht Institute.
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

General information

Changed:
<
<
The HPC cluster currently consists of 61 compute nodes (770 cores, 8TB working memory) and 220/490TB of HPC storage. It is funded and used by the UMC (6 divisions) and various groups at Utrecht University and the Hubrecht Laboratory, totalling 19 research groups.
>
>
The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel.
 
Changed:
<
<
A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

>
>
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS).
  Participating groups. Currently, twenty-two research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Line: 52 to 48
 

No permission to view HPC

META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1397466390" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2775753" user="patrick" version="4"
Added:
>
>
META FILEATTACHMENT attachment="UBC_logo_RGB_400x178.png" attr="" comment="" date="1452268952" name="UBC_logo_RGB_400x178.png" path="UBC_logo_RGB_400x178.png" size="12435" user="patrick" version="1"
META FILEATTACHMENT attachment="HPC_Flyer.png" attr="" comment="" date="1452270064" name="HPC_Flyer.png" path="HPC_Flyer.png" size="510066" user="patrick" version="1"

Revision 67 (2015-10-19) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 17 to 17
  A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, twenty-one research groups are actively using the HPC cluster.

>
>
Participating groups. Currently, twenty-two research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 66 (2015-09-18) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 17 to 17
  A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
* Participating groups.* Currently, nineteen research groups are actively using the HPC cluster.

>
>
Participating groups. Currently, twenty-one research groups are actively using the HPC cluster.

* HPC user council.* To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 65 (2015-06-17) - PhilipLijnzaad

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Deleted:
<
<
 
HPCflyer rasterized large.png

The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.

Line: 12 to 11
  (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)
Deleted:
<
<

 

General information

Changed:
<
<
The HPC cluster currently consists of 60 compute nodes (720 cores, 7.5TB working memory) and 260TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

>
>
The HPC cluster currently consists of 61 compute nodes (770 cores, 8TB working memory) and 220/490TB of HPC storage. It is funded and used by the UMC (6 divisions) and various groups at Utrecht University and the Hubrecht Laboratory, totalling 19 research groups.

A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

 Participating groups. Currently, nineteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

Changed:
<
<
How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.
>
>
How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.
 

Conditions and Support

Added:
>
>
 The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport.

Revision 64 (2015-05-08) - PhilipLijnzaad

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Added:
>
>
 
HPCflyer rasterized large.png

The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.

Revision 63 (2014-12-15) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 12 to 12
 (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)


Deleted:
<
<

News

Thursday, 13th Feb 2014

Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100,000.
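For workloads of that size, one way to stay well below such a limit (a sketch, again assuming the Grid Engine-style qsub interface; the script name is a placeholder) is an array job, which counts as a single submission:

    # 10,000 tasks submitted as one array job
    qsub -t 1-10000 process_sample.sh
    # inside the script, $SGE_TASK_ID tells each task which input to handle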

Thursday, 6th Feb 2014

The number of slots per queue has been adjusted, as discussed at the HPC user council meeting. The new settings are:

  • veryshort: 12 slots per node
  • short: 9
  • medium: 7
  • long: 4
  • verylong: 1
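A job can request one of these queues explicitly at submission time, for example (standard qsub option; the script name is a placeholder):

    qsub -q veryshort myjob.sh

Since shorter queues get more slots per node, short jobs can be packed at a higher density.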

Wednesday, 5th Feb 2014

/home and /hpc/local are now served from a different server. This should improve the interactive responsiveness when the cluster is heavily used.

If you notice anything different or unusual, please notify us.

 

General information

The HPC cluster currently consists of 60 compute nodes (720 cores, 7.5TB working memory) and 260TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Revision 62 (2014-11-07) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 38 to 38
 

General information

Changed:
<
<
The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, seventeen research groups are actively using the HPC cluster.

>
>
The HPC cluster currently consists of 60 compute nodes (720 cores, 7.5TB working memory) and 260TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, nineteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 61 (2014-06-12) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 39 to 39
 

General information

The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, sixteen research groups are actively using the HPC cluster.

>
>
Participating groups. Currently, seventeen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 60 (2014-05-06) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 17 to 17
  Thursday, 13th Feb 2014
Changed:
<
<
Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100.000.
>
>
Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100,000.
  Thursday, 6th Feb 2014
Line: 39 to 39
 

General information

The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, fifteen research groups are actively using the HPC cluster.

>
>
Participating groups. Currently, sixteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 59 (2014-04-14) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 43 to 43
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.
Added:
>
>

Conditions and Support

The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport.
 

Contact details

The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here.

Line: 68 to 72
 

No permission to view HPC

Changed:
<
<
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1391866279" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2774912" user="patrick" version="2"
>
>
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1397466390" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2775753" user="patrick" version="4"

Revision 58 (2014-04-03) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 35 to 35
  If you notice anything different or unusual, please notify us.
Deleted:
<
<
Wednesday, 8th Jan 2014

The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB).
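A job that actually needs this headroom could request it explicitly, e.g. (a sketch assuming the Grid Engine h_vmem resource; the script name is a placeholder):

    qsub -l h_vmem=15G myjob.sh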

Thursday, 17th Oct 2013

Yesterday evening, we performed some maintenance.

Among other things, the network connection of the first submit host (hpcs01.op.umcutrecht.nl) was upgraded. It is now the same as its sister (hpcs02): two gigabits/s, both to the storage, and to the rest of the network. The machine has a new IP address as well: 143.121.195.5; your ssh-client may notice this change.
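If your ssh client now refuses to connect because the cached host key no longer matches, the stale entry can be removed and re-learned with standard OpenSSH commands (the username is a placeholder):

    ssh-keygen -R hpcs01.op.umcutrecht.nl
    ssh username@hpcs01.op.umcutrecht.nl   # verify and accept the new host key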

A somewhat older change: on both submit hosts, the memory limits for interactive work have been relaxed: you can now use 10GB RAM plus 2GB swap space.

Thursday, 30th May 2013

To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here).

 

General information

Changed:
<
<
The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, fourteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

>
>
The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, fifteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Contact details

Revision 57 (2014-02-13) - MartinMarinus

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 15 to 15
 

News

Added:
>
>
Thursday, 13th Feb 2014

Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100.000.

 Thursday, 6th Feb 2014

The number of slots per queue has been adjusted, as discussed at the HPC user council meeting. The new settings are:

Revision 56 (2014-02-08) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 79 to 79
 

No permission to view HPC

Changed:
<
<
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1381756041" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2766279" user="patrick" version="1"
>
>
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1391866279" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2774912" user="patrick" version="2"

Revision 55 (2014-02-06) - MartinMarinus

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 15 to 15
 

News

Added:
>
>
Thursday, 6th Feb 2014

The number of slots per queue has been adjusted, as discussed at the HPC user council meeting. The new settings are:

  • veryshort: 12 slots per node
  • short: 9
  • medium: 7
  • long: 4
  • verylong: 1

Wednesday, 5th Feb 2014

/home and /hpc/local are now served from a different server. This should improve the interactive responsiveness when the cluster is heavily used.

If you notice anything different or unusual, please notify us.

 Wednesday, 8th Jan 2014

The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB).

Revision 54 (2014-01-08) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 15 to 15
 

News

Added:
>
>
Wednesday, 8th Jan 2014

The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB).

 Thursday, 17th Oct 2013

Yesterday evening, we performed some maintenance.

Revision 53 (2013-12-16) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 30 to 30
 

General information

The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, thirteen research groups are actively using the HPC cluster.

>
>
Participating groups. Currently, fourteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 52 (2013-11-27) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 29 to 29
 

General information

Changed:
<
<
The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

>
>
The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

 Participating groups. Currently, thirteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 51 (2013-11-05) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 30 to 30
 

General information

The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, eleven research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

>
>
Participating groups. Currently, thirteen research groups are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 50 (2013-10-24) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 30 to 30
 

General information

The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, ten research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

>
>
Participating groups. Currently, eleven research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 49 (2013-10-17) - MartinMarinus

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 15 to 15
 

News

Changed:
<
<
Thursday, 30th May 2013
>
>
Thursday, 17th Oct 2013
 
Changed:
<
<
To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here).
>
>
Yesterday evening, we performed some maintenance.
 
Changed:
<
<
Monday, 27th May 2013
>
>
Among other things, the network connection of the first submit host (hpcs01.op.umcutrecht.nl) was upgraded. It is now the same as its sister (hpcs02): two gigabits/s, both to the storage, and to the rest of the network. The machine has a new IP address as well: 143.121.195.5; your ssh-client may notice this change.
 
Changed:
<
<
Compute node 25 is back online! After a period of repeated hardware failures, we're happy to report that the server has been completely replaced and is up and running again.
>
>
A somewhat older change: on both submit hosts, the memory limits for interactive work have been relaxed: you can now use 10GB RAM plus 2GB swap space.
 
Changed:
<
<
Thursday, 16th May 2013
>
>
Thursday, 30th May 2013
 
Changed:
<
<
As discussed in the user council meeting of this afternoon, we made the following changes. The number of slots available for the veryshort queue is 12 per compute node, short 10, medium 8, long 4, verylong 1. In addition, all compute nodes are now able to submit jobs.
>
>
To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here).
 

General information

Revision 48 (2013-10-15) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 30 to 30
 

General information

The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Changed:
<
<
Participating groups. Currently, nine research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

>
>
Participating groups. Currently, ten research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 47 (2013-10-14) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 59 to 59
 

No permission to view HPC

Changed:
<
<
META FILEATTACHMENT attachment="researchict_logo.png" attr="h" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368695287" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer.png" attr="h" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"
>
>
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1381756041" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2766279" user="patrick" version="1"

Revision 46 (2013-10-11) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 29 to 29
 

General information

Changed:
<
<
The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

>
>
The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, nine research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.

Revision 45 (2013-05-31) - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png

Revision 44 (2013-05-31) - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 29 to 29
 

General information

Changed:
<
<
The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis.
>
>
The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources; contact us if you are interested.
 

Contact details

Revision 43 (2013-05-31) - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 52 to 52
 

Software installation

Changed:
<
<
The working team HPC installs and maintains software that is of general interest to HPC users. In addition, everybody may install user- or group-specific software. For more details, see here.
>
>
The HPC infrastructure provides the basis to install any software that is needed by users. More details are provided here (password required).
 

No permission to view HPC

META FILEATTACHMENT attachment="researchict_logo.png" attr="h" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"

Revision 42 (2013-05-30) - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 50 to 50
 A useful collection of How to's is provided here (password required):

No permission to view HPC

Added:
>
>

Software installation

The working team HPC installs and maintains software that is of general interest to HPC users. In addition, everybody may install user- or group-specific software. For more details, see here.

No permission to view HPC

 
META FILEATTACHMENT attachment="researchict_logo.png" attr="h" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368695287" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"

Revision 41 (2013-05-30) - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 17 to 17
  Thursday, 30th May 2013
Changed:
<
<
A second submit host is online! Go to add for more information.
>
>
To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here).
  Monday, 27th May 2013

Revision 40 (2013-05-30) - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 12 to 12
 (for a high-resolution version of the HPC flyer, click on the thumbnail on the left)


Added:
>
>
 

News

Added:
>
>
Thursday, 30th May 2013

A second submit host is online! Go to add for more information.

 Monday, 27th May 2013

Compute node 25 is back online! After a period of repeated hardware failures, we're happy to report that the server has been completely replaced and is up and running again.

Revision 392013-05-27 - MartinMarinus

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 9 to 9
  The second part of the wiki contains useful practical information for users of the HPC cluster.
Changed:
<
<
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left)
>
>
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left)
 
Added:
>
>

 

News

Changed:
<
<
Thursday, 16th May 2013
>
>
Monday, 27th May 2013

Compute node 25 is back online! After a period of repeated hardware failures, we're happy to report that the server has been completely replaced and is up and running again.

Thursday, 16th May 2013

 As discussed in the user council meeting of this afternoon, we made the following changes. The number of slots available for the veryshort queue is 12 per compute node, short 10, medium 8, long 4, verylong 1. In addition, all compute nodes are now able to submit jobs.
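
For illustration, the minimal Python sketch below spells out the cluster-wide slot totals these per-node counts imply across the 32 compute nodes; the numbers are taken from the announcement above, not read from the live scheduler configuration.

    # Announced slots per compute node; totals assume all 32 nodes
    # share this configuration (an assumption).
    SLOTS_PER_NODE = {"veryshort": 12, "short": 10, "medium": 8,
                      "long": 4, "verylong": 1}
    NUM_NODES = 32

    for queue, slots in SLOTS_PER_NODE.items():
        print(f"{queue:>9}: {slots:2d} slots/node -> {slots * NUM_NODES:3d} cluster-wide")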

General information

Revision 382013-05-16 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer rasterized large.png
Line: 7 to 7
  The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. Take your time and have a look around.
Changed:
<
<
The second part of the wiki contains useful practical information for users of the HPC cluster.
>
>
The second part of the wiki contains useful practical information for users of the HPC cluster.

(for a high-resolution version of the HPC flyer, click on the thumbnail on the left)

News

Thursday, 16th May 2013
As discussed in the user council meeting of this afternoon, we made the following changes. The number of slots available for the veryshort queue is 12 per compute node, short 10, medium 8, long 4, verylong 1. In addition, all compute nodes are now able to submit jobs.
 

General information

Revision 372013-05-16 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Changed:
<
<
HPCflyer_rasterized.png
>
>
HPCflyer rasterized large.png
  The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.

The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. Take your time and have a look around.

Changed:
<
<
The second part of the wiki contains useful practical information for users of the HPC cluster.

For a high-resolution image of the flyer go here.

HPCflyer rasterized large.png
>
>
The second part of the wiki contains useful practical information for users of the HPC cluster.
 

General information

Revision 362013-05-16 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

Line: 11 to 11
  For a high-resolution image of the flyer go here.
Added:
>
>
HPCflyer rasterized large.png
 

General information

The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis.

Revision 352013-05-16 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

Line: 11 to 11
  For a high-resolution image of the flyer go here.
Deleted:
<
<
%THUMBVIEW{ "HPCflyer_rasterized_large.png" }%

%THUMBVIEW{ "isilon_performance1.png" }%

 

General information

The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis.

Line: 42 to 38
 
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368695287" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer.png" attr="h" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"
Deleted:
<
<
META FILEATTACHMENT attachment="isilon_performance.png" attr="" comment="" date="1368695443" name="isilon_performance.png" path="isilon_performance.png" size="80491" user="patrick" version="1"
META FILEATTACHMENT attachment="isilon_performance1.png" attr="" comment="" date="1368695729" name="isilon_performance1.png" path="isilon_performance1.png" size="91953" user="patrick" version="1"

Revision 342013-05-16 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

Line: 13 to 13
  %THUMBVIEW{ "HPCflyer_rasterized_large.png" }%
Added:
>
>
%THUMBVIEW{ "isilon_performance1.png" }%
 

General information

The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis.

Line: 40 to 42
 
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368695287" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer.png" attr="h" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"
Added:
>
>
META FILEATTACHMENT attachment="isilon_performance.png" attr="" comment="" date="1368695443" name="isilon_performance.png" path="isilon_performance.png" size="80491" user="patrick" version="1"
META FILEATTACHMENT attachment="isilon_performance1.png" attr="" comment="" date="1368695729" name="isilon_performance1.png" path="isilon_performance1.png" size="91953" user="patrick" version="1"

Revision 332013-05-16 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

Line: 9 to 9
  The second part of the wiki contains useful practical information for users of the HPC cluster.
Changed:
<
<
For a high-resolution image of the flyer go here.
 
>
>
For a high-resolution image of the flyer go here.

%THUMBVIEW{ "HPCflyer_rasterized_large.png" }%

 

General information

Line: 36 to 38
 
META FILEATTACHMENT attachment="researchict_logo.png" attr="h" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
Changed:
<
<
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368532753" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
>
>
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368695287" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
 
META FILEATTACHMENT attachment="HPCflyer.png" attr="h" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"

Revision 322013-05-16 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Changed:
<
<
HPCflyer_rasterized.png
>
>
HPCflyer_rasterized.png
  The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.
Line: 9 to 9
  The second part of the wiki contains useful practical information for users of the HPC cluster.
Changed:
<
<
For a high-resolution image of the flyer go here.


>
>
For a high-resolution image of the flyer go here.
 
 

General information

Line: 36 to 34
 A useful collection of How to's is provided here (password required):

No permission to view HPC

Deleted:
<
<

HPC blog

Warning: Can't find topic HPC.BlogPost

 
META FILEATTACHMENT attachment="researchict_logo.png" attr="h" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368532753" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"

Revision 312013-05-15 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

Line: 13 to 13
 
Deleted:
<
<

HPC blog

Warning: Can't find topic HPC.BlogPost

 

General information

The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis.

Line: 47 to 36
 A useful collection of How to's is provided here (password required):

No permission to view HPC

Changed:
<
<
META FILEATTACHMENT attachment="logo_researchict.jpg" attr="h" comment="" date="1362133281" name="logo_researchict.jpg" path="logo_researchict.jpg" size="27364" stream="logo_researchict.jpg" tmpFilename="/usr/tmp/CGItemp44228" user="patrick" version="1"
META FILEATTACHMENT attachment="researchict_logo.png" attr="" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="" comment="" date="1368532753" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer.png" attr="" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"
>
>

HPC blog

Warning: Can't find topic HPC.BlogPost

META FILEATTACHMENT attachment="researchict_logo.png" attr="h" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="h" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="h" comment="" date="1368532753" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer.png" attr="h" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"

Revision 302013-05-15 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

Line: 18 to 18
 

HPC blog

Changed:
<
<
to be written
>
>

Warning: Can't find topic HPC.BlogPost

 

General information

Revision 292013-05-15 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

HPCflyer_rasterized.png

The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.

Changed:
<
<
The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. The second part of the wiki contains useful practical information for users of the HPC cluster.
>
>
The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. Take your time and have a look around.

The second part of the wiki contains useful practical information for users of the HPC cluster.

  For a high-resolution image of the flyer go here.

Revision 282013-05-14 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Changed:
<
<
logo_researchict.jpg
>
>
HPCflyer_rasterized.png
 
Changed:
<
<
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage. Research groups can participate by funding the hardware required for their own computational needs, HPC storage capacity can be rented on a per Terabyte, per year basis. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
>
>
The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht.
 
Changed:
<
<

Topics

>
>
The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. The second part of the wiki contains useful practical information for users of the HPC cluster.
 
Changed:
<
<
>
>
For a high-resolution image of the flyer go here.
 
Changed:
<
<

Contact details

>
>

 
Deleted:
<
<
The HPC working team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC-related user questions. For details and contact information, go here.
 
Changed:
<
<

Participating groups

>
>
 
Changed:
<
<
An overview of the currently participating groups can be found here.
>
>

HPC blog

 
Changed:
<
<

HPC user council

>
>
to be written
 
Changed:
<
<
To steer future directions for the HPC infrastructure, an HPC user council has been set up. More details can be found here.
>
>

General information

 
Changed:
<
<

How to get involved

>
>
The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.

Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster.

HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments.

How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis.
 
Changed:
<
<
to be written ...
>
>

Contact details

The HPC working team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC-related user questions. For details and contact information, go here.

 

HPC infrastructure

Line: 41 to 42
 

No permission to view HPC

META FILEATTACHMENT attachment="logo_researchict.jpg" attr="h" comment="" date="1362133281" name="logo_researchict.jpg" path="logo_researchict.jpg" size="27364" stream="logo_researchict.jpg" tmpFilename="/usr/tmp/CGItemp44228" user="patrick" version="1"
Added:
>
>
META FILEATTACHMENT attachment="researchict_logo.png" attr="" comment="" date="1368529477" name="researchict_logo.png" path="researchict_logo.png" size="154654" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized.png" attr="" comment="" date="1368530857" name="HPCflyer_rasterized.png" path="HPCflyer_rasterized.png" size="196166" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer_rasterized_large.png" attr="" comment="" date="1368532753" name="HPCflyer_rasterized_large.png" path="HPCflyer_rasterized_large.png" size="2781919" user="ksameith" version="1"
META FILEATTACHMENT attachment="HPCflyer.png" attr="" comment="" date="1368546858" name="HPCflyer.png" path="HPCflyer.png" size="2781919" user="ksameith" version="1"

Revision 272013-05-14 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Revision 262013-04-25 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Revision 252013-04-24 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Line: 11 to 11
 

Contact details

Changed:
<
<
The HPC working team is responsible for setting up and maintaining the HPC infrastructure, for details and contact information, go here.
>
>
The HPC working team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC-related user questions. For details and contact information, go here.
 

Participating groups

Revision 242013-04-24 - KatrinSameith

Line: 1 to 1
Changed:
<
<

Welcome to the High-Performance Computing (HPC) wiki

>
>

Welcome to the High-Performance Computing (HPC) wiki

  logo_researchict.jpg
Line: 20 to 21
  To steer future directions for the HPC infrastructure, an HPC user council has been setup. More details can be found here.
Changed:
<
<

Conditions

>
>

How to get involved

to be written ...
 
Changed:
<
<

Buildup of the HPC cluster

>
>

HPC infrastructure

 
Changed:
<
<
A general overview about the HPC cluster is provided here (password required):
>
>
A general overview about the HPC cluster is provided here (password required):

No permission to view HPC

 

First-time users

Revision 232013-04-12 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Revision 222013-04-02 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Line: 18 to 18
 

HPC user council

Changed:
<
<
To steer future directions for the HPC infrastructure, a HPC user council has been set up. More details can be found here.
>
>
To steer future directions for the HPC infrastructure, an HPC user council has been set up. More details can be found here.
 

Conditions

Revision 212013-03-25 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Changed:
<
<
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 1.5TB working memory) and 160TB of HPC storage. Research groups can participate by funding the hardware required for their own computational needs, HPC storage capacity can be rented on a per Terabyte, per year basis. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
>
>
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage. Research groups can participate by funding the hardware required for their own computational needs, HPC storage capacity can be rented on a per Terabyte, per year basis. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
 

Topics

Revision 202013-03-05 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Revision 192013-03-05 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Changed:
<
<
logo_researchict.jpg
>
>
logo_researchict.jpg
 
Changed:
<
<
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 1.5TB working memory) and 160TB of HPC storage. Research groups can participate by funding the hardware required for their own computational needs, HPC storage capacity can be rented on a per Terabyte, per year basis. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
>
>
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 1.5TB working memory) and 160TB of HPC storage. Research groups can participate by funding the hardware required for their own computational needs, HPC storage capacity can be rented on a per Terabyte, per year basis. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
 

Topics

Added:
>
>
 

Contact details

Revision 182013-03-01 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

logo_researchict.jpg

Changed:
<
<
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance compute power to all researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 1.5PB working memory) and 160TB of HPC storage.
>
>
The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance computing power to researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 1.5TB working memory) and 160TB of HPC storage. Research groups can participate by funding the hardware required for their own computational needs, HPC storage capacity can be rented on a per Terabyte, per year basis. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.
 

Topics

Contact details

Changed:
<
<
For contact details, go here
>
>
The HPC working team is responsible for setting up and maintaining the HPC infrastructure, for details and contact information, go here.

Participating groups

An overview of the currently participating groups can be found here.

HPC user council

To steer future directions for the HPC infrastructure, a HPC user council has been set up. More details can be found here.

Conditions

 

Buildup of the HPC cluster

A general overview about the HPC cluster is provided here (password required):
Line: 24 to 32
  A useful collection of How to's is provided here (password required):

No permission to view HPC

Changed:
<
<

HPC Web Utilities

META FILEATTACHMENT attachment="logo_researchict.jpg" attr="" comment="" date="1362133281" name="logo_researchict.jpg" path="logo_researchict.jpg" size="27364" stream="logo_researchict.jpg" tmpFilename="/usr/tmp/CGItemp44228" user="patrick" version="1"
>
>
META FILEATTACHMENT attachment="logo_researchict.jpg" attr="h" comment="" date="1362133281" name="logo_researchict.jpg" path="logo_researchict.jpg" size="27364" stream="logo_researchict.jpg" tmpFilename="/usr/tmp/CGItemp44228" user="patrick" version="1"

Revision 172013-03-01 - PatrickKemmeren

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Changed:
<
<
>
>
logo_researchict.jpg

The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program of the UMC Utrecht and aims to provide high-performance compute power to all researchers within the UMC Utrecht. The HPC infrastructure currently consists of 32 compute nodes (384 cores, 1.5PB working memory) and 160TB of HPC storage.

 
Changed:
<
<

>
>

Topics

 

Contact details

Added:
>
>
  For contact details, go here

Buildup of the HPC cluster

Line: 30 to 34
 
Added:
>
>
META FILEATTACHMENT attachment="logo_researchict.jpg" attr="" comment="" date="1362133281" name="logo_researchict.jpg" path="logo_researchict.jpg" size="27364" stream="logo_researchict.jpg" tmpFilename="/usr/tmp/CGItemp44228" user="patrick" version="1"

Revision 152013-02-21 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki


Added:
>
>

Contact details

For contact details, go here
 

Buildup of the HPC cluster

Changed:
<
<
A general overview about the HPC cluster is provided here:
>
>
A general overview about the HPC cluster is provided here (password required):
 

First-time users

Changed:
<
<
To get you started, some initial information is provided here:
>
>
To get you started, some initial information is provided here (password required):
 

No permission to view HPC

How to's

Changed:
<
<
A useful collection of How to's is provided here:
>
>
A useful collection of How to's is provided here (password required):
 

No permission to view HPC

Deleted:
<
<

Contact details

For contact details, go here
 

HPC Web Utilities

Revision 142013-02-21 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Line: 30 to 30
 
Deleted:
<
<
IMG_6201.JPG IMG_6190_-_Kopie.JPG

META FILEATTACHMENT attachment="IMG_6201.JPG" attr="" comment="" date="1360966903" name="IMG_6201.JPG" path="IMG_6201.JPG" size="2407906" stream="IMG_6201.JPG" tmpFilename="/usr/tmp/CGItemp31324" user="ksameith" version="1"
META FILEATTACHMENT attachment="IMG_6190_-_Kopie.JPG" attr="" comment="" date="1360967932" name="IMG_6190_-_Kopie.JPG" path="IMG_6190 - Kopie.JPG" size="319291" stream="IMG_6190 - Kopie.JPG" tmpFilename="/usr/tmp/CGItemp32368" user="ksameith" version="1"

Revision 132013-02-21 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Line: 31 to 31
 
Deleted:
<
<
  • IMG_6201.JPG:
  IMG_6201.JPG
Deleted:
<
<
  • IMG_6190_-_Kopie.JPG:
    IMG_6190_-_Kopie.JPG

  • IMG_6190_-_Kopie.JPG:
  IMG_6190_-_Kopie.JPG

META FILEATTACHMENT attachment="IMG_6201.JPG" attr="" comment="" date="1360966903" name="IMG_6201.JPG" path="IMG_6201.JPG" size="2407906" stream="IMG_6201.JPG" tmpFilename="/usr/tmp/CGItemp31324" user="ksameith" version="1"

Revision 122013-02-15 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Line: 30 to 30
 
Added:
>
>
  • IMG_6201.JPG:
    IMG_6201.JPG

  • IMG_6190_-_Kopie.JPG:
    IMG_6190_-_Kopie.JPG

  • IMG_6190_-_Kopie.JPG:
    IMG_6190_-_Kopie.JPG

META FILEATTACHMENT attachment="IMG_6201.JPG" attr="" comment="" date="1360966903" name="IMG_6201.JPG" path="IMG_6201.JPG" size="2407906" stream="IMG_6201.JPG" tmpFilename="/usr/tmp/CGItemp31324" user="ksameith" version="1"
META FILEATTACHMENT attachment="IMG_6190_-_Kopie.JPG" attr="" comment="" date="1360967932" name="IMG_6190_-_Kopie.JPG" path="IMG_6190 - Kopie.JPG" size="319291" stream="IMG_6190 - Kopie.JPG" tmpFilename="/usr/tmp/CGItemp32368" user="ksameith" version="1"

Revision 112013-02-15 - KatrinSameith

Line: 1 to 1
 

Welcome to the High-Performance Computing (HPC) wiki

Added:
>
>
 
Changed:
<
<
>
>

Buildup of the HPC cluster

A general overview about the HPC cluster is provided here:
 
Changed:
<
<

>
>

First-time users

To get you started, some initial information is provided here:

No permission to view HPC

How to's

A useful collection of How to's is provided here:

No permission to view HPC

Contact details

For contact details, go here
 

HPC Web Utilities

Revision 102013-02-14 - KatrinSameith

Line: 1 to 1
Changed:
<
<

Welcome to the HPC web

>
>

Welcome to the High-Performance Computing (HPC) wiki


 
Deleted:
<
<

Available Information

 
Changed:
<
<
>
>
  • Buildup of the HPC cluster (available servers, directories for storage)
  • Instructions for first-time users (queues, submission, software installation)
 
Changed:
<
<
>
>
  • Frequently asked questions (FAQs)


 

HPC Web Utilities

Revision 92013-02-14 - KatrinSameith

Line: 1 to 1
 

Welcome to the HPC web

Available Information

Changed:
<
<
>
>
 
Deleted:
<
<
 

Revision 82013-02-14 - KatrinSameith

Line: 1 to 1
 

Welcome to the HPC web

Available Information

Added:
>
>
 

Revision 72013-02-07 - KatrinSameith

Line: 1 to 1
 

Welcome to the HPC web

Available Information

Changed:
<
<
  • ...
  • ...
  • ...
>
>
 

HPC Web Utilities

Changed:
<
<
>
>
 
  • WebTopicList - all topics in alphabetical order
  • WebChanges - recent topic changes in this web
  • WebNotify - subscribe to an e-mail alert sent when topics change

Revision 62005-03-28 - TWikiContributor

Line: 1 to 1
 

Welcome to the HPC web

Available Information

Revision 52005-03-28 - TWikiContributor

Line: 1 to 1
Changed:
<
<
Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ...
>
>

Welcome to the HPC web

 
Changed:
<
<
>
>

Available Information

  • ...
  • ...
  • ...
 
Changed:
<
<

Site Tools of the HPC Web

>
>

HPC Web Utilities

 
Deleted:
<
<

Notes:

  • You are currently in the HPC web. The color code for this web is this background, so you know where you are.
  • If you are not familiar with the TWiki collaboration platform, please visit WelcomeGuest first.

  • TWiki: TWiki documentation, welcome guest and user registration (links: Search, Changes, Notification, Statistics, Preferences)
  • HPC: Documentation related to the High-Performance Computing infrastructure (links: Search, Changes, Notification, Statistics, Preferences)
Tip: Webs are color-coded for identification and reference. Contact bioinf-holstege@lists.umcutrecht.nl if you need a workspace web for your team.


Revision 42002-04-14 - PeterThoeny

Line: 1 to 1
 Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ...

Changed:
<
<

Maintenance of the HPC web

  •    (More options in WebSearch)
  • WebChanges: Find out recent modifications to the TWiki.HPC web.
  • WebIndex: Display all TWiki.HPC topics in alphabetical order. See also the faster WebTopicList
  • WebNotify: Subscribe to be automatically notified when something changes in the TWiki.HPC web.
  • WebStatistics: View access statistics of the TWiki.HPC web.
  • WebPreferences: Preferences of the TWiki.HPC web.
>
>

Site Tools of the HPC Web

  Notes:
Changed:
<
<
  • You are currently in the TWiki.HPC web. The color code for this web is a (SPECIFY COLOR) background, so you know where you are.
  • If you are not familiar with the TWiki collaboration tool, please visit WelcomeGuest in the TWiki.TWiki web first.
>
>
  • You are currently in the HPC web. The color code for this web is this background, so you know where you are.
  • If you are not familiar with the TWiki collaboration platform, please visit WelcomeGuest first.
 
  • TWiki: TWiki documentation, welcome guest and user registration (links: Search, Changes, Notification, Statistics, Preferences)
  • HPC: Documentation related to the High-Performance Computing infrastructure (links: Search, Changes, Notification, Statistics, Preferences)
Tip: Webs are color-coded for identification and reference. Contact bioinf-holstege@lists.umcutrecht.nl if you need a workspace web for your team.


Revision 32002-04-07 - PeterThoeny

Line: 1 to 1
 Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ...

Line: 18 to 18
 
  • You are currently in the TWiki.HPC web. The color code for this web is a (SPECIFY COLOR) background, so you know where you are.
  • If you are not familiar with the TWiki collaboration tool, please visit WelcomeGuest in the TWiki.TWiki web first.
Changed:
<
<

Warning: Can't find topic TWiki.TWikiWebsTable

>
>
  • TWiki: TWiki documentation, welcome guest and user registration (links: Search, Changes, Notification, Statistics, Preferences)
  • HPC: Documentation related to the High-Performance Computing infrastructure (links: Search, Changes, Notification, Statistics, Preferences)
Tip: Webs are color-coded for identification and reference. Contact bioinf-holstege@lists.umcutrecht.nl if you need a workspace web for your team.


Revision 22001-11-24 - PeterThoeny

Line: 1 to 1
 Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ...

Line: 8 to 8
 
  •    (More options in WebSearch)
  • WebChanges: Find out recent modifications to the TWiki.HPC web.
Changed:
<
<
  • WebIndex: Display all TWiki.HPC topics in alphabetical order.
>
>
 
  • WebNotify: Subscribe to be automatically notified when something changes in the TWiki.HPC web.
  • WebStatistics: View access statistics of the TWiki.HPC web.
  • WebPreferences: Preferences of the TWiki.HPC web.

Revision 12001-08-08 - PeterThoeny

Line: 1 to 1
Added:
>
>
Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ...

Maintenance of the HPC web

  •    (More options in WebSearch)
  • WebChanges: Find out recent modifications to the TWiki.HPC web.
  • WebIndex: Display all TWiki.HPC topics in alphabetical order.
  • WebNotify: Subscribe to be automatically notified when something changes in the TWiki.HPC web.
  • WebStatistics: View access statistics of the TWiki.HPC web.
  • WebPreferences: Preferences of the TWiki.HPC web.

Notes:

  • You are currently in the TWiki.HPC web. The color code for this web is a (SPECIFY COLOR) background, so you know where you are.
  • If you are not familiar with the TWiki collaboration tool, please visit WelcomeGuest in the TWiki.TWiki web first.

Warning: Can't find topic TWiki.TWikiWebsTable

 