24 August 2022 – Supercomputing
The New Supercomputer Helix Goes Live
The time has come: The new Heidelberg high performance computer “bwForCluster Helix” is now in operation. It is available as a state service to all researchers at Baden-Württemberg universities, colleges and research institutes in the bwIDM network.
Helix is the successor system to “bwForCluster MLS&WISO” and will be operated by the Future IT – Research & Education (FIRE) service area at the University Computing Centre (URZ). The 700 currently active users of “bwForCluster MLS&WISO” can immediately start working with Helix – and new users are welcome!
With 20,000 processor cores and around 200 GPUs, Helix is significantly more powerful than its predecessor. On Helix, researchers can work across systems thanks to a direct connection to SDS@hd, the state service for the storage of research data, which is likewise operated at the URZ. This makes generating, analyzing and storing large amounts of research data particularly efficient.
Future-proof performance for cutting-edge research
Helix is to be used primarily for data- and compute-intensive research in the life sciences, natural sciences and computational humanities. Specific areas of application include research projects in structural systems biology, medicine, materials science and computational humanities.
“With Helix, we can provide a future-proof, high performance platform for research in Heidelberg and throughout the state,” explains Prof. Dr. Vincent Heuveline, Executive Director of the URZ and CIO of Heidelberg University. “The new high performance computer is an essential and powerful tool for cutting-edge research and will help us better understand and develop solutions to the greatest challenges facing us today.”
Successful installation despite difficult conditions
“In times of material shortages and interrupted supply chains, setting up Helix was not only a technical challenge but also a logistical one; we began the process in the middle of the pandemic, after all,” notes Dr. Martin Baumann, head of the URZ service division FIRE. “That makes us all the more thrilled that we can now provide our researchers with a state-of-the-art high performance computer that is precisely tailored to the needs of scientific research.”
Within the service area, planning and installation were chiefly the responsibility of the HPC team from the Data-Intensive Computing service group, led by Dr. Sabine Richling. “Our team was undeterred by the frequently long wait times. We maintained close contact with the manufacturers in order to reach our goal as quickly as possible,” explains Richling. “Ultimately, we were able to make good use of the additional time to optimize the configuration of the cluster and prepare it for the start of operations. And now we are excited to support researchers in using Helix!”
The URZ's HPC team, in conjunction with University IT Mannheim, operates one of several statewide competence centers for high performance computing. These centers provide consulting and technical support to users engaged in HPC-based research projects.
Helix: A building block of the statewide bwHPC concept
Helix received a total of approximately five million euros in funding from the German Research Foundation and the Baden-Württemberg Ministry of Science, Research and the Arts, as well as from university funds. The supercomputer is part of the bwHPC concept of the state of Baden-Württemberg, which provides high performance computers at five university locations for research in various disciplines as well as for basic services and teaching.
bwForCluster MLS&WISO will be operated in parallel until 23 September 2022
In order to make the transition to Helix as smooth as possible for researchers and to avoid interruptions to ongoing projects, the previous high performance computer “bwForCluster MLS&WISO” will continue to operate in parallel until 23 September 2022.
Interested in using the new high performance computer for your research project? You can find more information about Helix in the linked Service Catalogue entry.
Helix hardware at a glance
- 20,000 AMD EPYC Milan processor cores
- ca. 100 terabytes of main memory
- ca. 200 NVIDIA Ampere Tensor Core GPUs (A100 and A40)
- Non-blocking NVIDIA Mellanox InfiniBand HDR interconnect (at least 100 Gbit/s per compute node)
- High performance storage with a parallel IBM Spectrum Scale file system: ca. 11 petabytes of total capacity and ca. 800 terabytes of flash storage
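
To illustrate the kind of multi-node workload this hardware is designed for, the following is a minimal, hypothetical sketch of a distributed computation with MPI in Python. It assumes the mpi4py library, which is commonly available on HPC clusters; this is an illustration only, not a description of the actual software environment on Helix.

    # Hypothetical sketch: each MPI process (rank) sums its own slice
    # of a large range, and the partial results are combined across
    # processes with a reduction. Illustrative only; the software
    # environment on Helix may differ.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # ID of this process
    size = comm.Get_size()   # total number of processes

    # Each rank sums every size-th integer starting at its own rank,
    # so together the ranks cover the range [0, n) exactly once.
    n = 10**8
    local_sum = sum(range(rank, n, size))

    # Combine the partial sums on rank 0
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Sum over {size} processes: {total}")

On a batch-operated cluster, a script like this would typically be launched across nodes with a command such as mpirun -n <processes> python script.py, wrapped in a job submitted to the cluster's scheduler; the Service Catalogue entry mentioned above describes how to actually get started on Helix.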