Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 24 to 24 | ||||||||
The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport. | ||||||||
Deleted: | ||||||||
< < | Price list: 1 CPU share (€1200): ~50,000 CPU hrs; 1 GPU share (€1200): ~5,000 GPU hrs (includes 6 CPUs); 1 TB non-redundant high-performance storage (€180/TB/year); 1 TB non-redundant low-performance/archive storage (€45/TB/year) | |||||||
Contact details: The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 24 to 24 | ||||||||
The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport. | ||||||||
Added: | ||||||||
> > | Price list: 1 CPU share (€1200): ~50,000 CPU hrs; 1 GPU share (€1200): ~5,000 GPU hrs (includes 6 CPUs); 1 TB non-redundant high-performance storage (€180/TB/year); 1 TB non-redundant low-performance/archive storage (€45/TB/year) | |||||||
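As a rough illustration of what the price list above implies, here is a small worked-cost sketch (Python); only the per-share, per-hour and per-TB rates come from the list above, the project figures are hypothetical:

```python
# Rough cost arithmetic based on the price list above.
cpu_share_eur, cpu_share_hours = 1200, 50_000   # 1 CPU share: ~50,000 CPU hrs
gpu_share_eur, gpu_share_hours = 1200, 5_000    # 1 GPU share: ~5,000 GPU hrs
hp_storage_eur_per_tb_year = 180                # high-performance storage

print(f"~{cpu_share_eur / cpu_share_hours:.3f} EUR per CPU hour")   # ~0.024
print(f"~{gpu_share_eur / gpu_share_hours:.2f} EUR per GPU hour")   # ~0.24

# Hypothetical project: 200,000 CPU hours plus 5 TB of high-performance
# storage for one year.
cost = (200_000 / cpu_share_hours) * cpu_share_eur + 5 * hp_storage_eur_per_tb_year
print(f"example project budget: ~{cost:.0f} EUR")                   # ~5700
```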
Contact details: The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 49 to 49 | ||||||||
A general overview about the HPC cluster is provided here (password required): No permission to view HPC | ||||||||
Added: | ||||||||
> > |
| |||||||
| ||||||||
Added: | ||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 13 to 13 | ||||||||
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-eight research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-nine research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been set up. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 13 to 13 | ||||||||
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-seven research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-eight research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 13 to 13 | ||||||||
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-six research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-seven research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 28 to 28 | ||||||||
The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | ||||||||
Deleted: | ||||||||
< < | HPC infrastructure: A general overview about the HPC cluster is provided here (password required): No permission to view HPC | |||||||
First-time users: To get you started, some initial information is provided here (password required): | ||||||||
Line: 49 to 44 | ||||||||
No permission to view HPC | ||||||||
Added: | ||||||||
> > | HPC infrastructure: A general overview about the HPC cluster is provided here (password required): No permission to view HPC | |||||||
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Changed: | ||||||||
< < | The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht and the Hubrecht Institute. | |||||||
> > | The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht, Hubrecht Institute and Princess Máxima Center for Pediatric Oncology. | |||||||
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left)
General information | ||||||||
Changed: | ||||||||
< < | The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel. | |||||||
> > | The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel. | |||||||
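The page does not spell out the scheduler commands here; as a sketch only, assuming a Grid Engine style batch system (the queue name, memory request and script path below are illustrative assumptions, not documented settings), submitting a task from a head node might look like this:

```python
# Minimal sketch of submitting a batch job, assuming a Grid Engine style scheduler.
# Queue name, resource request and script path are illustrative assumptions.
import subprocess

cmd = [
    "qsub",
    "-N", "example_job",     # job name
    "-q", "medium",          # hypothetical queue choice
    "-l", "h_vmem=4G",       # per-slot memory request
    "-cwd",                  # run from the current working directory
    "run_analysis.sh",       # hypothetical job script
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip())  # e.g. the job id reported by the scheduler
```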
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-four research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-six research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 13 to 13 | ||||||||
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-three research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-four research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 13 to 13 | ||||||||
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-two research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-three research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) Facility wiki | ||||||||
Line: 13 to 13 | ||||||||
Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-two research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. | |||||||
> > | Participating groups. Currently, twenty-two research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. | |||||||
How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested.
Conditions and Support | ||||||||
Line: 24 to 26 | ||||||||
Contact details | ||||||||
Changed: | ||||||||
< < | The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | |||||||
> > | The HPC team is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | |||||||
HPC infrastructure |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Changed: | ||||||||
< < | Welcome to the High-Performance Computing (HPC) wiki | |||||||
> > | Welcome to the High-Performance Computing (HPC) Facility wiki | |||||||
Changed: | ||||||||
< < | ||||||||
> > | ||||||||
Changed: | ||||||||
< < | The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() | |||||||
> > | The High-performance Computing (HPC) facility is set up to provide high-performance computing power to all life science researchers at Utrecht Science Park. Coordinated by the Utrecht Bioinformatics Center and subsidized by Utrecht University and University Medical Center, it currently provides computational power to over twenty different research groups located within Utrecht University, UMC Utrecht and the Hubrecht Institute. | |||||||
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left)
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 61 compute nodes (770 cores, 8TB working memory) and 220/490TB of HPC storage. It is funded and used by the UMC (6 divisions) and various groups at Utrecht University and the Hubrecht Laboratory, totalling 19 research groups. | |||||||
> > | The HPC facility consists of 1200 cores, 10TB working memory and 490TB of High-Performance storage. The HPC facility runs on CentOS Linux and provides a batch-wise queueing system with a few head nodes and many compute nodes for submitting and running many computational tasks in parallel. | |||||||
Changed: | ||||||||
< < | A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | |||||||
> > | Dedicated administrators maintain and develop the HPC infrastructure and provide support to end users. These positions are funded by UMC Utrecht and Utrecht University (ITS). | |||||||
Participating groups. Currently, twenty-two research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. | ||||||||
Line: 52 to 48 | ||||||||
No permission to view HPC
| ||||||||
Added: | ||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 17 to 17 | ||||||||
A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, twenty-one research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-two research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 17 to 17 | ||||||||
A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | * Participating groups.* Currently, nineteen research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, twenty-one research groups are actively using the HPC cluster. | |||||||
* HPC user council.* To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Deleted: | ||||||||
< < | ||||||||
The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() | ||||||||
Line: 12 to 11 | ||||||||
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left) | ||||||||
Deleted: | ||||||||
< < | ||||||||
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 60 compute nodes (720 cores, 7.5TB working memory) and 260TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | |||||||
> > | The HPC cluster currently consists of 61 compute nodes (770 cores, 8TB working memory) and 220/490TB of HPC storage. It is funded and used by the UMC (6 divisions) and various groups at Utrecht University and the Hubrecht Laboratory, totalling 19 research groups.
A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | |||||||
Participating groups. Currently, nineteen research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. | ||||||||
Changed: | ||||||||
< < | How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. | |||||||
> > | How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. | |||||||
Conditions and Support | ||||||||
Added: | ||||||||
> > | ||||||||
The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Added: | ||||||||
> > | ||||||||
The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 38 to 38 | ||||||||
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, seventeen research groups are actively using the HPC cluster. | |||||||
> > | The HPC cluster currently consists of 60 compute nodes (720 cores, 7.5TB working memory) and 260TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, nineteen research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 39 to 39 | ||||||||
General information: The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, sixteen research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, seventeen research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 17 to 17 | ||||||||
Thursday, 13th Feb 2014 | ||||||||
Changed: | ||||||||
< < | Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100.000. | |||||||
> > | Inspired by a slight mishap yesterday, we'll be limiting the maximum number of jobs any user can queue simultaneously to 100,000. | |||||||
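If a workload would otherwise exceed the 100,000 queued-job cap mentioned above, bundling work items into a single array job keeps the queue count down. A sketch, assuming a Grid Engine style array submission; the item counts and script name are hypothetical:

```python
# Sketch: bundle work items so the submitted array stays below the per-user cap.
# Item count, cap margin and script name are hypothetical.
import math
import subprocess

n_items = 250_000                          # individual work items
max_tasks = 90_000                         # stay safely under the 100,000 cap
items_per_task = math.ceil(n_items / max_tasks)
n_tasks = math.ceil(n_items / items_per_task)

cmd = ["qsub", "-t", f"1-{n_tasks}", "process_chunk.sh", str(items_per_task)]
print("would run:", " ".join(cmd))         # process_chunk.sh maps the task id to its items
```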
Thursday, 6th Feb 2014 | ||||||||
Line: 39 to 39 | ||||||||
General information: The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, fifteen research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, sixteen research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 43 to 43 | ||||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. | ||||||||
Added: | ||||||||
> > | Conditions and Support: The conditions that apply when using the HPC infrastructure and the level of support that we are able to provide can be found at ConditionsAndSupport. | |||||||
Contact details: The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | ||||||||
Line: 68 to 72 | ||||||||
No permission to view HPC | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 35 to 35 | ||||||||
If you notice anything different or unusual, please notify us. | ||||||||
Deleted: | ||||||||
< < | Wednesday, 8th Jan 2014 The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB). Thursday, 17th Oct 2013 Yesterday evening, we performed some maintenance. Among others, the network connection of the first submit host (hpcs01.op.umcutrecht.nl) was upgraded. It is now the same as its sister (hpcs02): two gigabits/s, both to the storage, and to the rest of the network. The machine has a new IP address as well: 143.121.195.5; your ssh-client may notice this change. A somewhat older change: on both submit hosts, the memory limits for interactive work have been relaxed: you can now use 10GB ram, plus 2GB swapspace. Thursday, 30th May 2013 To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here). | |||||||
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, fourteen research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. | |||||||
> > | The HPC cluster currently consists of 56 compute nodes (672 cores, 7TB working memory) and 160TB of HPC storage. A dedicated Linux administrator is funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, fifteen research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet four times a year to discuss the usage of the HPC cluster as well as new developments. | |||||||
How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested.
Contact details |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 15 to 15 | ||||||||
News | ||||||||
Added: | ||||||||
> > | Thursday, 6th Feb 2014
The number of slots per queue has been adjusted, as discussed at the HPC user council meeting. The new settings are:
| |||||||
Wednesday, 8th Jan 2014 The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB). |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 15 to 15 | ||||||||
News | ||||||||
Added: | ||||||||
> > | Wednesday, 8th Jan 2014 The memory limit of individual slots for the different queues has been increased to 15GB (was 10GB). | |||||||
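Jobs that exceed the per-slot memory limit are typically killed; a defensive sketch (standard-library Python, Linux) that lets a long-running job watch its own peak memory and stop cleanly before reaching the 15GB figure mentioned above:

```python
# Sketch: stop a job cleanly before it hits the per-slot memory limit.
import resource
import sys

SLOT_LIMIT_GB = 15        # per-slot limit mentioned above
SAFETY_MARGIN_GB = 1      # hypothetical margin

def peak_rss_gb() -> float:
    # On Linux, ru_maxrss is reported in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024**2

def check_memory() -> None:
    if peak_rss_gb() > SLOT_LIMIT_GB - SAFETY_MARGIN_GB:
        sys.exit("approaching the per-slot memory limit, stopping early")

# call check_memory() between the processing steps of a long-running job
```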
Thursday, 17th Oct 2013 Yesterday evening, we performed some maintenance. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 30 to 30 | ||||||||
General information: The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, thirteen research groups are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, fourteen research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 29 to 29 | ||||||||
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | |||||||
> > | The HPC cluster currently consists of 46 compute nodes (552 cores, 5.75TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | |||||||
Participating groups. Currently, thirteen research groups are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 30 to 30 | ||||||||
General information: The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, eleven research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, thirteen research groups are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 30 to 30 | ||||||||
General information: The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, ten research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, eleven research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 15 to 15 | ||||||||
News | ||||||||
Changed: | ||||||||
< < | Thursday, 30th May 2013 | |||||||
> > | Thursday, 17th Oct 2013 | |||||||
Changed: | ||||||||
< < | To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here). | |||||||
> > | Yesterday evening, we performed some maintenance. | |||||||
Changed: | ||||||||
< < | Monday, 27th May 2013 | |||||||
> > | Among others, the network connection of the first submit host (hpcs01.op.umcutrecht.nl) was upgraded. It is now the same as its sister (hpcs02): two gigabits/s, both to the storage, and to the rest of the network. The machine has a new IP address as well: 143.121.195.5; your ssh-client may notice this change. | |||||||
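When a submit host changes IP address, ssh clients may refuse to connect because of a stale known_hosts entry. A small sketch (calling the standard ssh-keygen tool from Python; the hostnames are the ones mentioned above) for clearing the old entries before reconnecting:

```python
# Sketch: drop stale known_hosts entries for the renumbered submit host.
import subprocess

for entry in ("hpcs01.op.umcutrecht.nl", "143.121.195.5"):
    subprocess.run(["ssh-keygen", "-R", entry], check=False)

# The next interactive "ssh <user>@hpcs01.op.umcutrecht.nl" will then prompt
# to accept the host key at its new address.
```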
Changed: | ||||||||
< < | Compute node 25 is back online! After a period of repeated hardware failures, we're happy to report that the server has been completely replaced and is up and running again. | |||||||
> > | A somewhat older change: on both submit hosts, the memory limits for interactive work have been relaxed: you can now use 10GB ram, plus 2GB swapspace. | |||||||
Changed: | ||||||||
< < | Thursday, 16th May 2013 | |||||||
> > | Thursday, 30th May 2013 | |||||||
Changed: | ||||||||
< < | As discussed in this afternoon's user council meeting, we made the following changes. The number of slots available for the veryshort queue is 12 per compute node, short 10, medium 8, long 4, verylong 1. In addition, all compute nodes are now able to submit jobs. | |||||||
> > | To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here). | |||||||
General information |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 30 to 30 | ||||||||
General information: The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. | ||||||||
Changed: | ||||||||
< < | Participating groups. Currently, nine research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
> > | Participating groups. Currently, ten research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | |||||||||
---|---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | |||||||||
Line: 59 to 59 | |||||||||
No permission to view HPC | |||||||||
Changed: | |||||||||
< < |
| ||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 29 to 29 | ||||||||
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
> > | The HPC cluster currently consists of 42 compute nodes (504 cores, 5.25TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, nine research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. | |||||||
HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 29 to 29 | ||||||||
General information | ||||||||
Changed: | ||||||||
< < | The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been setup. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis. | |||||||
> > | The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, a HPC user council has been setup. We aim to meet every two months to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups can participate by funding the hardware required for their own computational needs. HPC storage capacity can be rented on a per Terabyte, per year basis. For testing purposes and/or trying out the HPC infrastructure, free trial accounts can also be arranged that have (limited) access to the HPC resources, contact us if you are interested. | |||||||
Contact details |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 52 to 52 | ||||||||
Software installation | ||||||||
Changed: | ||||||||
< < | The working team HPC installs and maintains software that is of general interest to HPC users. In addition, everybody may install user- or group-specific software. For more details, see here. | |||||||
> > | The HPC infrastructure provides the basis to install any software that is needed by users. More details are provided here (password required). | |||||||
No permission to view HPC
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 50 to 50 | ||||||||
A useful collection of How to's is provided here (password required): No permission to view HPC | ||||||||
Added: | ||||||||
> > | Software installation: The working team HPC installs and maintains software that is of general interest to HPC users. In addition, everybody may install user- or group-specific software. For more details, see here. No permission to view HPC | |||||||
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 17 to 17 | ||||||||
Thursday, 30th May 2013 | ||||||||
Changed: | ||||||||
< < | A second submit host is online! Go to add for more information. | |||||||
> > | To further facilitate basic interactive usage of the HPC cluster, we installed a second login/submission server (see here). | |||||||
Monday, 27th May 2013 |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 12 to 12 | ||||||||
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left)
| ||||||||
Added: | ||||||||
> > | ||||||||
News | ||||||||
Added: | ||||||||
> > | Thursday, 30th May 2013 A second submit host is online! Go to add for more information. | |||||||
Monday, 27th May 2013 Compute node 25 is back online! After a period of repeated hardware failures, we're happy to report that the server has been completely replaced and is up and running again. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 9 to 9 | ||||||||
The second part of the wiki contains useful practical information for users of the HPC cluster. | ||||||||
Changed: | ||||||||
< < | (for a high-resolution version of the HPC flyer, click on the thumbnail on the left) | |||||||
> > | (for a high-resolution version of the HPC flyer, click on the thumbnail on the left) | |||||||
Added: | ||||||||
> > | ||||||||
News | ||||||||
Changed: | ||||||||
< < | Thursday, 16th May 2013 | |||||||
> > | Monday, 27th May 2013 Compute node 25 is back online! After a period of repeated hardware failures, we're happy to report that the server has been completely replaced and is up and running again. Thursday, 16th May 2013 | |||||||
As discussed in this afternoon's user council meeting, we made the following changes. The number of slots available for the veryshort queue is 12 per compute node, short 10, medium 8, long 4, verylong 1. In addition, all compute nodes are now able to submit jobs.
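For a sense of scale, the per-node slot counts above multiply out across the cluster as follows (a sketch; the node count is purely illustrative, since it changed over time):

```python
# Cluster-wide capacity implied by the per-node slot limits above.
# The node count is illustrative only.
slots_per_node = {"veryshort": 12, "short": 10, "medium": 8, "long": 4, "verylong": 1}
n_compute_nodes = 40

for queue, slots in slots_per_node.items():
    print(f"{queue:>9}: up to {slots * n_compute_nodes} concurrent slots cluster-wide")
```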
General information |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 7 to 7 | ||||||||
The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. Take your time and have a look around. | ||||||||
Changed: | ||||||||
< < | The second part of the wiki contains useful practical information for users of the HPC cluster. | |||||||
> > | The second part of the wiki contains useful practical information for users of the HPC cluster.
(for a high-resolution version of the HPC flyer, click on the thumbnail on the left) News: Thursday, 16th May 2013: As discussed in this afternoon's user council meeting, we made the following changes. The number of slots available for the veryshort queue is 12 per compute node, short 10, medium 8, long 4, verylong 1. In addition, all compute nodes are now able to submit jobs. | |||||||
General information |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Changed: | ||||||||
< < | ![]() | |||||||
> > | ||||||||
The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() | ||||||||
Changed: | ||||||||
< < | The second part of the wiki contains useful practical information for users of the HPC cluster.
For a high-resolution image of the flyer go here. | |||||||
> > | The second part of the wiki contains useful practical information for users of the HPC cluster. | |||||||
General information |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() | ||||||||
Line: 11 to 11 | ||||||||
For a high-resolution image of the flyer go here. | ||||||||
Added: | ||||||||
> > | ||||||||
General informationThe HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been setup. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis. |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() | ||||||||
Line: 11 to 11 | ||||||||
For a high-resolution image of the flyer go here. | ||||||||
Deleted: | ||||||||
< < | %THUMBVIEW{ "HPCflyer_rasterized_large.png" }% %THUMBVIEW{ "isilon_performance1.png" }% | |||||||
General informationThe HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been setup. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis. | ||||||||
Line: 42 to 38 | ||||||||
| ||||||||
Deleted: | ||||||||
< < |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() | ||||||||
Line: 13 to 13 | ||||||||
%THUMBVIEW{ "HPCflyer_rasterized_large.png" }% | ||||||||
Added: | ||||||||
> > | %THUMBVIEW{ "isilon_performance1.png" }% | |||||||
General informationThe HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been setup. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis. | ||||||||
Line: 40 to 42 | ||||||||
| ||||||||
Added: | ||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() | ||||||||
Line: 9 to 9 | ||||||||
The second part of the wiki contains useful practical information for users of the HPC cluster. | ||||||||
Changed: | ||||||||
< < | For a high-resolution image of the flyer go here. | |||||||
> > | For a high-resolution image of the flyer go here. %THUMBVIEW{ "HPCflyer_rasterized_large.png" }% | |||||||
General information | ||||||||
Line: 36 to 38 | ||||||||
| ||||||||
Changed: | ||||||||
< < |
| |||||||
> > |
| |||||||
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Changed: | ||||||||
< < | ![]() | |||||||
> > | ![]() | |||||||
The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() | ||||||||
Line: 9 to 9 | ||||||||
The second part of the wiki contains useful practical information for users of the HPC cluster. | ||||||||
Changed: | ||||||||
< < | For a high-resolution image of the flyer go here.
| |||||||
> > | For a high-resolution image of the flyer go here. | |||||||
General information | ||||||||
Line: 36 to 34 | ||||||||
A useful collection of How to's is provided here (password required): No permission to view HPC | ||||||||
Deleted: | ||||||||
< < | HPC blogWarning: Can't find topic HPC.BlogPost | |||||||
|
Line: 1 to 1 | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() | |||||||||||
Line: 13 to 13 | |||||||||||
| |||||||||||
Deleted: | |||||||||||
< < |
| ||||||||||
General informationThe HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity.Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been setup. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis. | |||||||||||
Line: 47 to 36 | |||||||||||
A useful collection of How to's is provided here (password required): No permission to view HPC | |||||||||||
Changed: | |||||||||||
< < |
| ||||||||||
> > | HPC blogWarning: Can't find topic HPC.BlogPost
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() | ||||||||
Line: 18 to 18 | ||||||||
HPC blog | ||||||||
Changed: | ||||||||
< < | to be written | |||||||
> > | Warning: Can't find topic HPC.BlogPost | |||||||
General information |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki![]() ![]() ![]() | ||||||||
Changed: | ||||||||
< < | The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. The second part of the wiki contains useful practical information for users of the HPC cluster. | |||||||
> > | The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. Take your time and have a look around. The second part of the wiki contains useful practical information for users of the HPC cluster. | |||||||
For a high-resolution image of the flyer go here. |
Line: 1 to 1 | |||||||||
---|---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | |||||||||
Changed: | |||||||||
< < | ![]() | ||||||||
> > | ![]() | ||||||||
Changed: | |||||||||
< < | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() | ||||||||
> > | The High-performance Computing (HPC) cluster is part of workpackage 5 (infrastructure) of the Research ICT program![]() ![]() | ||||||||
Changed: | |||||||||
< < | Topics | ||||||||
> > | The first part of the wiki introduces the HPC cluster to those of you who may be interested to join. The second part of the wiki contains useful practical information for users of the HPC cluster. | ||||||||
Changed: | |||||||||
< < | |||||||||
> > | For a high-resolution image of the flyer go here. | ||||||||
Changed: | |||||||||
< < | Contact details | ||||||||
> > | |||||||||
Deleted: | |||||||||
< < | The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | ||||||||
Changed: | |||||||||
< < | Participating groups | ||||||||
> > | |||||||||
Changed: | |||||||||
< < | An overview of the currently participating groups can be found here. | ||||||||
> > | HPC blog | ||||||||
Changed: | |||||||||
< < | HPC user council | ||||||||
> > | to be written | ||||||||
Changed: | |||||||||
< < | To steer future directions for the HPC infrastructure, an HPC user council has been setup. More details can be found here. | ||||||||
> > | General information | ||||||||
Changed: | |||||||||
< < | How to get involved | ||||||||
> > | The HPC cluster currently consists of 32 compute nodes (384 cores, 4TB working memory) and 160TB of HPC storage, and will grow further in the near future. A dedicated Linux administrator and part-time bioinformatician are funded by the research ICT program. In addition, the program subsidizes the HPC storage capacity. Participating groups. Currently, six research groups from different divisions in the UMC Utrecht are actively using the HPC cluster. HPC user council. To steer future directions of the HPC infrastructure together with the participating research groups, an HPC user council has been setup. We aim to meet every other month to discuss the usage of the HPC cluster as well as new developments. How to get involved. Research groups participate by funding the hardware required for their own computational needs. In addition, HPC storage capacity can be rented on a per Terabyte, per year basis. | ||||||||
Changed: | |||||||||
< < | to be written ... | ||||||||
> > | Contact details
The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | ||||||||
HPC infrastructure | |||||||||
Line: 41 to 42 | |||||||||
No permission to view HPC
| |||||||||
Added: | |||||||||
> > |
|
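As a quick sanity check on the cluster figures quoted in the revision above (32 compute nodes, 384 cores, 4TB working memory), here is a minimal back-of-the-envelope sketch. It assumes a homogeneous cluster with resources spread evenly over the nodes; the wiki does not state the actual node configuration, so the per-node numbers are illustrative only.

```python
# Rough per-node figures derived from the cluster totals quoted above.
# Assumption: a homogeneous cluster (resources spread evenly over all nodes);
# the actual node configuration is not documented here and may differ.

nodes = 32
total_cores = 384
total_memory_tb = 4.0  # total working memory across the cluster, in TB

cores_per_node = total_cores // nodes                 # 384 / 32 = 12 cores
memory_gb_per_node = total_memory_tb * 1024 / nodes   # ~128 GB RAM per node

print(f"~{cores_per_node} cores and ~{memory_gb_per_node:.0f} GB RAM per compute node")
```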
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 11 to 11 | ||||||||
Contact details | ||||||||
Changed: | ||||||||
< < | The working team HPC is responsible for setting up and maintaining the HPC infrastructure, for details and contact information, go here. | |||||||
> > | The working team HPC is responsible for setting up and maintaining the HPC infrastructure, as well as for helping out with HPC related user questions. For details and contact information, go here. | |||||||
Participating groups |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Changed: | ||||||||
< < | Welcome to the High-Performance Computing (HPC) wiki | |||||||
> > | Welcome to the High-Performance Computing (HPC) wiki | |||||||
Line: 20 to 21 | ||||||||
To steer future directions for the HPC infrastructure, an HPC user council has been setup. More details can be found here. | ||||||||
Changed: | ||||||||
< < | Conditions | |||||||
> > | How to get involved
to be written ... | ||||||||
Changed: | ||||||||
< < | Buildup of the HPC cluster | |||||||
> > | HPC infrastructure | |||||||
Changed: | ||||||||
< < | A general overview about the HPC cluster is provided here (password required): | |||||||
> > | A general overview about the HPC cluster is provided here (password required): No permission to view HPC | |||||||
First-time users |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 18 to 18 | ||||||||
HPC user council | ||||||||
Changed: | ||||||||
< < | To steer future directions for the HPC infrastructure, a HPC user council has been setup. More details can be found here. | |||||||
> > | To steer future directions for the HPC infrastructure, an HPC user council has been setup. More details can be found here. | |||||||
Conditions |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Changed: | ||||||||
< < | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
> > | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
Topics |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Changed: | ||||||||
< < | | ||||||||
> > | | ||||||||
Changed: | ||||||||
< < | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
> > | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
Topics | ||||||||
Added: | ||||||||
> > | ||||||||
Contact details |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Changed: | ||||||||
< < | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
> > | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
Topics
Contact details | ||||||||
Changed: | ||||||||
< < | For contact details, go here | |||||||
> > | The working team HPC is responsible for setting up and maintaining the HPC infrastructure, for details and contact information, go here.
Participating groups
An overview of the currently participating groups can be found here.
HPC user council
To steer future directions for the HPC infrastructure, a HPC user council has been setup. More details can be found here.
Conditions | ||||||||
Buildup of the HPC cluster
A general overview about the HPC cluster is provided here (password required): | ||||||||
Line: 24 to 32 | ||||||||
A useful collection of How to's is provided here (password required): No permission to view HPC | ||||||||
Changed: | ||||||||
< < | HPC Web Utilities
| |||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Changed: | ||||||||
< < | ||||||||
> > | The High-performance Computing (HPC) infrastructure is part of workpackage 5 (infrastructure) of the Research ICT program. An overview of the program can be found here. | ||||||||
Changed: | ||||||||
< < | ||||||||
> > | Topics | |||||||
Contact details | ||||||||
Added: | ||||||||
> > | ||||||||
For contact details, go here
Buildup of the HPC cluster | ||||||||
Line: 30 to 34 | ||||||||
| ||||||||
Added: | ||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Added: | ||||||||
> > | Contact details
For contact details, go here | ||||||||
Buildup of the HPC cluster | ||||||||
Changed: | ||||||||
< < | A general overview about the HPC cluster is provided here: | |||||||
> > | A general overview about the HPC cluster is provided here (password required): | |||||||
First-time users | ||||||||
Changed: | ||||||||
< < | To get you started, some initial information is provided here: | |||||||
> > | To get you started, some initial information is provided here (password required): | |||||||
No permission to view HPC
How to's | ||||||||
Changed: | ||||||||
< < | A useful collection of How to's is provided here: | |||||||
> > | A useful collection of How to's is provided here (password required): | |||||||
No permission to view HPC | ||||||||
Deleted: | ||||||||
< < | Contact details
For contact details, go here | ||||||||
HPC Web Utilities |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 30 to 30 | ||||||||
| ||||||||
Deleted: | ||||||||
< < |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 31 to 31 | ||||||||
| ||||||||
Deleted: | ||||||||
< < |
| |||||||
| ||||||||
Deleted: | ||||||||
< < |
| |||||||
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Line: 30 to 30 | ||||||||
| ||||||||
Added: | ||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the High-Performance Computing (HPC) wiki | ||||||||
Added: | ||||||||
> > | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > | Buildup of the HPC cluster
A general overview about the HPC cluster is provided here: | ||||||||
Changed: | ||||||||
< < | ||||||||
> > | First-time users
To get you started, some initial information is provided here: No permission to view HPC
How to's
A useful collection of How to's is provided here: No permission to view HPC
Contact details
For contact details, go here | ||||||||
HPC Web Utilities |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Changed: | ||||||||
< < | Welcome to the HPC web | |||||||
> > | Welcome to the High-Performance Computing (HPC) wiki | |||||||
Deleted: | ||||||||
< < | Available Information | |||||||
Changed: | ||||||||
< < | ||||||||
> > |
| |||||||
Changed: | ||||||||
< < | ||||||||
> > |
| |||||||
HPC Web Utilities |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the HPC web
Available Information | ||||||||
Changed: | ||||||||
< < | ||||||||
> > | ||||||||
Deleted: | ||||||||
< < |
| |||||||
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the HPC web
Available Information | ||||||||
Added: | ||||||||
> > | ||||||||
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the HPC web
Available Information | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > | ||||||||
HPC Web Utilities | ||||||||
Changed: | ||||||||
< < | ||||||||
> > | ||||||||
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the HPC web
Available Information |
Line: 1 to 1 | ||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Changed: | ||||||||||||||||||||
< < | Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ... | |||||||||||||||||||
> > | Welcome to the HPC web | |||||||||||||||||||
Changed: | ||||||||||||||||||||
< < | ||||||||||||||||||||
> > | Available Information
| |||||||||||||||||||
Changed: | ||||||||||||||||||||
< < | Site Tools of the HPC Web | |||||||||||||||||||
> > | HPC Web Utilities | |||||||||||||||||||
Deleted: | ||||||||||||||||||||
< < |
Notes:
|
Line: 1 to 1 | ||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ... | ||||||||||||||||||||
Changed: | ||||||||||||||||||||
< < | Maintenance of the HPC web | |||||||||||||||||||
> > | Site Tools of the HPC Web | |||||||||||||||||||
Notes: | ||||||||||||||||||||
Changed: | ||||||||||||||||||||
< < |
| |||||||||||||||||||
> > |
| |||||||||||||||||||
|
Line: 1 to 1 | ||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ... | ||||||||||||||||||||
Line: 18 to 18 | ||||||||||||||||||||
| ||||||||||||||||||||
Changed: | ||||||||||||||||||||
< < | Warning: Can't find topic TWiki.TWikiWebsTable | |||||||||||||||||||
> > |
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ... | ||||||||
Line: 8 to 8 | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > |
| |||||||
|
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Added: | ||||||||
> > | Welcome to the home of TWiki.HPC. This is a web-based collaboration area for ...
Maintenance of the HPC web
Notes:
|