SYSTEMS
InfiniCon Systems Powers China's Research Grid Project
Company Selected to Supply Large-Scale HPC InfiniBand Fabric For Shandong University; Cluster Among Initial Sites To Be Connected Into World's Largest Education and Research Grid

InfiniCon Systems, the premier provider of I/O virtualization and clustering solutions for next-generation server networks, announced today that Shandong University of the People's Republic of China has selected InfiniCon to supply a 10Gbps switch fabric for a 96-node, InfiniBand-based cluster that will be an integral site in the China Education and Research Grid. The selection by Shandong University was facilitated through InfiniCon's OEM relationship with Langchao Beijing - China's leading HPC and commercial IT solutions vendor - and through Shanghai Hong Ri International, an InfiniCon ASAP program channel partner in China.

One of China's oldest and most prestigious universities, Shandong will deploy the high-performance computing (HPC) cluster - using InfiniCon's InfinIO(TM) 3000 Switch Series as the interconnect - to support projects in biopharmacy, physics, mathematics, and other disciplines. Twenty-one research centers and thirty colleges within the university will have access to the computational power of the Intel Xeon-based cluster, which yields 749 Gflops of peak performance. Located within the new Shandong Province HPC Center, the cluster has been designated as one of the initial dozen clusters to be connected to the China National Grid, sponsored by the Ministry of Education.
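To put the peak-performance figure in perspective, the brief sketch below breaks the quoted 749 Gflops down per node and per processor. The node count and the aggregate figure come from the release; the dual-processor assumption does not, and is included only for illustration.

/* Illustrative arithmetic only: breaks the cluster's quoted peak performance
 * down per node and per processor. The node count and the 749 Gflops figure
 * are from the release; the dual-processor-per-node assumption is not, and
 * the actual Shandong University node configuration may differ. */
#include <stdio.h>

int main(void)
{
    const int    nodes         = 96;     /* from the release */
    const double peak_gflops   = 749.0;  /* from the release */
    const int    cpus_per_node = 2;      /* assumption: dual-processor Xeon nodes */

    double per_node = peak_gflops / nodes;        /* roughly 7.8 Gflops per node */
    double per_cpu  = per_node / cpus_per_node;   /* roughly 3.9 Gflops per CPU  */

    printf("per node: %.1f Gflops, per CPU: %.1f Gflops\n", per_node, per_cpu);
    return 0;
}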
One of the largest and most ambitious grid projects to date, the national Education and Research Grid was launched in October 2003; by 2006, it will link more than 200,000 users from approximately 100 sites across China. Ultimately, the grid will harness computing power sufficient to create a virtual "shared supercomputer" capable of 15 trillion calculations per second. Universities such as Shandong will connect to a common virtual hub that will automatically locate appropriate application resources, from life sciences applications to video courses and e-learning. China's university system is expected to save significantly on development costs, as each institution will focus on its area of expertise and tap into other applications as needed via the grid.

InfiniCon's InfinIO switching technology was also recently selected for a 128-node HPC cluster by Tsinghua University, another of the initial sites to be integrated into China's national grid project and the third InfiniBand site in the project to date. InfiniCon is currently the supplier of all InfiniBand solutions in the grid.

"HPC users are actively looking to InfiniBand to provide standards-based cluster interconnects and I/O channels that can compete at the high end of the bandwidth scale and the low end of the latency scale," stated Earl Joseph, Vice President for High Performance Computing at IDC. "Many, if not most, HPC customers are exploring grids in order to improve communications between institutions, manage resources within an institution, and maximize resource utilization. InfiniCon's large installations at the China Education and Research Grid represent important test beds for InfiniBand-based cluster computing."

Langchao Beijing is the first Chinese company to incorporate InfiniBand into its HPC server offerings, which include the Intel-based Langchao TS10000 solution that Shandong chose for its cluster. "The Shandong University project is truly a great milestone toward establishing InfiniBand technology in China," said Mr. Leijun Hu, chief technology officer, Langchao Beijing. "The stability of InfiniCon's products and the bandwidth and latency achieved with InfiniBand technology enable us to provide our HPC customers with the very best price-performance interconnect available."

InfiniBand: The Premier High-Speed Interconnect

InfiniBand is becoming the interconnect technology of choice for high-speed networking, due to its superior bandwidth (up to 30Gbps), ultra-low switching latency across large fabrics, and Remote Direct Memory Access (RDMA) capabilities. Benchmarks derived by Langchao and Shanghai HongRi from the initial deployment of the cluster at Shandong University have been dramatic: the 96-node Xeon cluster has attained 749 Gflops of performance while providing bandwidth of up to 800MB/second and enabling servers to communicate at latencies as low as 5.6 microseconds. Additionally, as nodes were added to the cluster, the InfiniCon fabric delivered over 70% average efficiency from the incremental nodes, demonstrating InfiniBand to be the highest-ROI interconnect available.

"The IT world continues to march toward fabric and grid models of computing," noted Charles Foley, executive vice president at InfiniCon. "Discriminating end-users such as Shandong University and leading IT vendors such as Langchao Beijing are demanding ultra high-performance networks - easily deployed and based upon open industry standards - as the foundation of these fabrics. The global installed base of our InfinIO solutions has demonstrated the efficacy of InfiniBand technology in numerous large-scale production venues, driving it forward as a standard that is unmatched in price, performance, and stability."
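The latency and bandwidth figures cited above are the kind of numbers normally obtained from a two-node ping-pong microbenchmark run across the fabric. The sketch below illustrates that technique using MPI; it is not the benchmark Langchao and Shanghai HongRi ran, and the message size and iteration count are arbitrary choices for illustration.

/* Minimal two-rank MPI ping-pong sketch, illustrating how point-to-point
 * latency and bandwidth figures like those cited above are commonly measured.
 * This is not the benchmark used at Shandong University; it only assumes an
 * MPI library running over the InfiniBand fabric. Run with exactly 2 ranks. */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 1000
#define MSG_BYTES  (1 << 20)          /* 1 MB messages for the bandwidth test */

static char buf[MSG_BYTES];

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Latency: time round trips of zero-byte messages, report half the average. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double latency_us = (MPI_Wtime() - t0) / (2.0 * ITERATIONS) * 1e6;

    /* Bandwidth: time round trips of large messages; bytes moved / elapsed time. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }
    double mb_per_sec = (2.0 * ITERATIONS * (double)MSG_BYTES)
                        / (MPI_Wtime() - t0) / 1.0e6;

    if (rank == 0)
        printf("latency: %.2f us   bandwidth: %.0f MB/s\n", latency_us, mb_per_sec);
    MPI_Finalize();
    return 0;
}

Small-message round trips yield the half-round-trip latency (the basis for figures such as 5.6 microseconds), while large-message round trips yield sustained bandwidth (the basis for figures such as 800MB/second). The scaling-efficiency figure quoted earlier is a separate measurement, typically computed as sustained cluster performance divided by the number of nodes times single-node performance.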