Penn State Selects InfiniCon for Massive HPC Cluster

Penn State University Selects InfiniCon Systems' 10Gbps InfiniBand Switch Fabric for Massive High Performance Compute Cluster

'Off-the-Shelf' Supercomputing Solution with 1.3 TeraFlops of Performance Places the Penn State Cluster in the Top 100 on the 'TOP500' List

InfiniCon Systems, the premier provider of shared I/O and switching solutions for next-generation server networks, announced today that The Pennsylvania State University (PSU) has selected InfiniCon to supply the network fabric for a 160-node High Performance Computing (HPC) cluster. InfiniCon's InfinIO Switch Series -- based on the InfiniBand Architecture -- will provide a 10Gbps interconnect for a heterogeneous computing cluster that combines servers from Sun (SunFire V60x) and Dell (PowerEdge 2850) running the Linux operating system.

The HPC cluster at PSU is expected to yield more than 1.3 TeraFlops of processing power and will be used primarily by the university's gravitational physics community to support research into detecting gravitational waves as a tool for making astronomical discoveries.

Upon installation, the InfiniBand-based cluster will position PSU to claim a spot among the 100 most powerful computing systems worldwide, as rated by the 'TOP500' List of the World's Fastest Supercomputers. The TOP500 list, compiled twice annually by researchers at supercomputer centers in California, Tennessee, and Germany, will next be updated in the spring of 2004. PSU already has two supercomputers ranked in the top 30 percent of the list, last released in November 2003.

"A prime reason we chose the InfinIO platform was InfiniCon's proven capability to implement larger fabrics," stated Vijay Agarwala, Director of the High Performance Computing group at Information Technology Services of Penn State University. "We support a large contingent of users whose application requirements are compute-intensive.
InfiniCon's leadership in field-testing and deploying large node-count InfiniBand configurations mitigates deployment risks and lets us focus on creating a superior computing environment that can keep pace with the university's needs."

InfiniBand's standards-based performance advantages make it an ideal interconnect for building and scaling HPC and database clusters. Offering unmatched bandwidth (10Gbps to 30Gbps) and extremely low switching latency (as low as 5 microseconds for message-passing applications), an InfiniBand-based network fabric lets end-users build incredibly powerful clusters from commodity computing components -- at costs as much as 90% lower than proprietary supercomputing solutions. More than 40 percent of the supercomputers on the 2003 TOP500 list rely upon a clustered architecture.

"We continue to see increasing traction by InfiniBand-based systems in the HPTC market," stated Jamie Gruener, Senior Analyst at The Yankee Group. "InfiniCon's investment in robust, scalable systems is enabling InfiniBand to deliver massive computing power in these compute-intensive environments at truly disruptive levels of price/performance."

InfiniCon's InfinIO Switch Series packs up to thirty-two 10Gbps InfiniBand ports into only 1U of rack space, making it the densest switching solution in the industry. Introduced in 2003, the InfinIO Switch Series has previously been selected as the switching fabric for the world's largest Linux supercomputer cluster, installed at RIKEN in Japan (512 nodes), as well as at other large-scale sites, including Texas A&M University. InfiniCon offers a completely integrated solution for designing InfiniBand fabrics -- from the adapters, software, and middleware required for the host environment to the award-winning multi-protocol switching fabric solutions that give InfiniBand clusters access to Fibre Channel and Ethernet resources.
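The bandwidth and latency figures quoted above can be put in perspective with a standard latency-plus-serialization transfer-time model. The sketch below is illustrative only: the 10Gbps bandwidth and 5-microsecond latency come from the release, while the message sizes (and the model itself) are assumptions for illustration, not measurements from the PSU cluster.

```python
# Simple transfer-time model for a cluster interconnect:
# time = fixed per-message latency + serialization time.
# Bandwidth (10 Gbps) and latency (~5 microseconds) are the figures
# quoted in the release; message sizes are illustrative assumptions.

LATENCY_S = 5e-6        # per-message latency in seconds, as quoted
BANDWIDTH_BPS = 10e9    # link bandwidth in bits/second (10 Gbps), as quoted

def transfer_time(message_bytes):
    """Estimated time to move one message: fixed latency plus wire time."""
    return LATENCY_S + (message_bytes * 8) / BANDWIDTH_BPS

# Small messages are latency-bound; large messages are bandwidth-bound.
small = transfer_time(1024)        # 1 KiB: ~5.8 microseconds
large = transfer_time(1024 ** 2)   # 1 MiB: ~0.84 milliseconds
print(f"1 KiB: {small * 1e6:.2f} us, 1 MiB: {large * 1e6:.2f} us")
```

Under this model, a 1 KiB message spends most of its time in the fixed 5-microsecond latency, which is why low switching latency matters so much for fine-grained message-passing workloads.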
"InfiniBand continues to prove its value in the mainstream as an extremely attractive technology option that satisfies the requirements of large, 'always-on' compute environments through a very affordable economic model," stated InfiniCon Chief Technology Officer Todd Matters. "InfiniCon's selection by Penn State extends our leadership in supplying robust, 10Gbps networking solutions that are ready for today's most scalable high performance computing applications."