InfiniCon Systems and Scali Team to Demonstrate Leading-Edge Application Scaling

Leading Providers of High Performance Computing Solutions Showcase 10Gbps InfiniBand Clusters Running Fluent Application Sets

InfiniCon Systems, the premier provider of I/O virtualization and clustering solutions for next-generation server networks, today announced a series of industry-first demonstrations, staged in partnership with leading vendors, to showcase the power and scale of the InfiniBand(R) Architecture at ClusterWorld, April 6-8, in San Jose. InfiniCon's InfinIO(TM) family of InfiniBand-based solutions - which provides up to a 30Gbps, low-latency infrastructure for building ultra-scalable computing fabrics - will be integrated into a number of real-world application environments:

-- Fluent Applications for Computational Fluid Dynamics (InfiniCon Booth # 413)

InfiniCon's InfinIO technology is featured in a demonstration of Hewlett-Packard's Linux-based ProLiant DL140 servers clustered to run application sets that illustrate the capabilities of computational fluid dynamics modeling software from Fluent Inc., as applied to the design and testing of jet aircraft. In this environment, multiple commodity servers process a single large computational job as a single system image, offering higher levels of performance and availability at lower cost than is traditionally available from proprietary systems. Leveraging the InfiniBand architecture, the InfiniCon fabric - composed of high-performance, low-latency switching systems and host channel adapters - provides the 10Gbps network that allows the clustered nodes to work in tandem. In addition, the performance-optimized implementation of Scali MPI Connect(TM) middleware from Scali combines high performance with high-availability features that protect against faults in the network, interconnect, or routing tables.

"We're excited about our partnership with InfiniCon and our ability to deliver truly high-end cluster performance," commented Hakon Bugge, CTO of Scali. "Scali's platform-independent MPI simplifies adoption of InfiniBand technology by virtualizing the interconnect layer. Leading applications such as Fluent that already support Scali MPI Connect can run on the InfiniCon platform automatically, without changes to the application or the MPI."

-- HPC Express (Intel Booth # 401)

InfiniCon's InfinIO 7000 Shared I/O and Clustering System is being showcased as part of Intel's HPC Express demonstration, which features next-generation, industry-standard PCI Express technology implemented in a 72-node, InfiniBand-based high performance computing cluster built from open-source and off-the-shelf hardware and software components from ten vendors. The InfinIO 7000 provides the 10Gbps interconnect and embedded fabric management for a segment of the cluster, illustrating InfiniBand's ability to accelerate data throughput and optimize application performance as Xeon* processor-based server systems are scaled.

"Intel Architecture-based clusters - coupled using PCI Express technology inside the box and InfiniBand outside the box - exhibit unparalleled performance gains for users of high-performance computing," said Jim Pappas, director of Initiative Marketing for Intel's Enterprise Platform Group. "The HPC Express demonstration at ClusterWorld clearly shows the strategic value of standards-based InfiniBand fabrics coupled with PCI Express* technology to enterprise-class, commercial HPC environments."
In addition, InfiniCon's InfinIO Switching Systems will be utilized in a demonstration by Penguin Computing (Booth # 201) to provide the low-latency, 10Gbps interconnect for an Opteron cluster running Scyld Beowulf cluster software. InfiniCon will also be showcased in HPC clustering demonstrations within the AMD Pavilion (Booth # 313) in partnership with Appro and Rackable Systems.
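To illustrate the interconnect independence Bugge describes, the sketch below is a minimal, generic MPI ping-pong written against the standard MPI C interface; it is not drawn from InfiniCon, Scali, or Fluent code, and the buffer size and iteration count are arbitrary. The same source runs unchanged over Ethernet or an InfiniBand fabric, with interconnect selection left to the MPI library and job launcher.

/*
 * Illustrative sketch only: a standard MPI ping-pong between two ranks.
 * Generic MPI code, not specific to InfiniCon or Scali MPI Connect; the
 * interconnect (Ethernet, InfiniBand, etc.) is chosen by the MPI library
 * and launcher, so the application source does not change.
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS     1000   /* round trips to average over (arbitrary) */
#define MSG_BYTES 8      /* small message to expose latency (arbitrary) */

int main(int argc, char **argv)
{
    int rank, size;
    char buf[MSG_BYTES] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    /* Rank 0 sends to rank 1 and waits for the echo; rank 1 echoes back. */
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("Average round-trip latency: %.2f microseconds\n",
               (t1 - t0) * 1e6 / ITERS);

    MPI_Finalize();
    return 0;
}

Launched across two cluster nodes with the site's usual MPI job launcher, the measured round-trip time reflects whichever interconnect the MPI middleware selects, without any change to the program above.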