BlueBEAR delivers analysis in less time
- Written by: Chris Oneal
- Category: NETWORKS
Researchers across the University of Birmingham who use a centrally funded supercomputing service will now benefit from additional modelling power and a wider range of services, following a complete replacement of their previous system, which was installed in 2007. The University of Birmingham also provides additional supercomputing power to GridPP, a collaboration of particle physicists and computer scientists from the UK and CERN.
The initial service delivers 15 TFlop/s of performance from nearly 850 cores (Intel Sandy Bridge eight-core processors). The installation offers performance comparable to the outgoing system, with a far lower carbon footprint, and will grow in the light of IT Services’ experience to meet the varied demands of the research community. Unlike many upgrades, the University of Birmingham has released recurrent funding for this service so that it can satisfy the needs of both existing and new users.
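As a rough sanity check on that headline figure (the article gives only the core count and processor family, so the clock speed below is an assumption): a Sandy Bridge core with AVX can execute eight double-precision floating-point operations per cycle, so around 850 such cores at roughly 2.2 GHz give a theoretical peak close to the quoted 15 TFlop/s.

```python
# Back-of-the-envelope peak estimate for the initial BlueBEAR service.
# The core count is from the article; the clock speed is an assumed figure,
# since the exact processor SKU is not stated.

CORES = 850            # "nearly 850 cores"
CLOCK_HZ = 2.2e9       # assumed ~2.2 GHz
FLOPS_PER_CYCLE = 8    # Sandy Bridge AVX: 4-wide add + 4-wide multiply per core

peak = CORES * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"Theoretical peak: {peak / 1e12:.1f} TFlop/s")   # -> ~15.0 TFlop/s
```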
The server cluster uses IBM System x iDataPlex with Intel Sandy Bridge processors. OCF has installed more high-performance server clusters using IBM iDataPlex servers than any other UK integrator.
The use of Mellanox’s Virtual Protocol Interconnect (VPI) cards within the cluster design makes it easier for IT Services to redeploy nodes between the various components of the BEAR services as workloads change: a VPI adapter can be configured as either an InfiniBand or an Ethernet port, so nodes can be moved between roles without a hardware change.
The Linux-based service is one part of the overall Birmingham Environment for Academic Research (BEAR), which is being jointly developed by the University, OCF and other specialist partners. BEAR is a set of complementary and inter-linked services designed to meet the diverse needs of the University’s wide research base. Other components of the service include:
- A Windows-based service that brings the power of supercomputing to users of Windows applications, without the need to learn the low-level Linux and job submission commands associated with a traditional Linux supercomputer (a minimal, illustrative job submission is sketched after this list).
- A GPGPU service for applications that benefit from GPU accelerator technology.
- A large-memory service for needs that are primarily data intensive rather than compute intensive, whilst recognising that the two needs are rarely completely distinct.
- A sophisticated visualisation centre, incorporating active stereo display and motion tracking.
- Highly scalable collaborative conferencing and collaborative visualisation services, which are especially beneficial to large and often international research groups.
- A dedicated render farm, based on a number of CPU-only and GPU-assisted nodes and aimed primarily at demanding off-line rendering, planned for release at a later phase of the BEAR developments.
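To give a flavour of the low-level job submission commands that the Windows service is intended to hide, the sketch below shows how a batch job might be submitted on a Linux HPC service. It is purely illustrative: it assumes a Slurm-style scheduler and its standard sbatch command, and the core count, time limit and module name are hypothetical; the article does not say which scheduler or application stack BEAR uses.

```python
# Illustrative only: submitting a batch job from Python, assuming a
# Slurm-style scheduler ("sbatch"). The resource requests and module
# name are hypothetical and not taken from the article.
import subprocess
import tempfile

job_script = """#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --ntasks=16          # hypothetical: request 16 cores
#SBATCH --time=01:00:00      # hypothetical: one-hour wall-clock limit

module load my_application   # hypothetical application module
srun my_application input.dat
"""

# Write the script to a temporary file and hand it to the scheduler.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(job_script)
    script_path = f.name

subprocess.run(["sbatch", script_path], check=True)
```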
The extra modelling power will allow researchers to run larger, more detailed and more accurate simulations and test cases in less time than was possible with the previous service. Within the Department of Chemistry, Professor Roy Johnston and his team, one of the supercomputer’s major users, are using the Linux supercomputer for research in numerous areas, including computational nanoscience. Professor Johnston’s team is investigating how to create more cost-effective and more environmentally friendly catalysts for fuel cells and hydrogen cars, for example.
Paul Hatton, supercomputer & visualisation expert, IT Services, University of Birmingham, said: "The new Linux HPC service has been well received by users of the previous service, and most of the ongoing projects that were benefitting from the previous service, including some from archaeology and economics as well as the science and engineering disciplines, have continued to use the new service. The next, and in many ways more difficult, challenge is to widen the user base to those research areas that do not traditionally make use of a Linux HPC service, which was the motivation for including other services, especially the Windows HPC service, in the wider BEAR environment."
The University funded the core BEAR service, whilst researchers have added resources using their research grants, benefitting from the system management offered by IT Services. Designed, built and integrated by OCF, the cluster uses IBM hardware and software. As part of a wider collaboration, OCF has also provided staff resources to assist the University with outreach work to recruit new users onto the service.
Paul Hatton continued: “OCF has provided expert consultancy to support the design of BlueBEAR II. It has built flexible, scalable and unobtrusive high performance server clusters powered by both CPUs and GPU processors, using Linux and Windows, which will support our research now and into the future. The team is also providing support to help recruit new users to the service. On the balance of hardware & software expertise and provision of support services, OCF came out on top.”