Q&A with Scott Metcalf, CEO of PathScale

By Chris O'Neal – PathScale develops innovative software and hardware solutions to accelerate supercomputing. The PathScale InfiniPath interconnect helps deliver on the promise of Linux clusters by significantly lowering communications latency, which improves the performance of complex applications. The PathScale EKOPath Compiler Suite drives Linux clusters to performance levels that can exceed the world's most powerful supercomputers. Today, PathScale technologies are enabling scientists, engineers and researchers to more effectively solve a whole new class of computational challenges, from weather modeling and aerospace design to drug discovery and oil and gas research. To learn more, Supercomputing Online interviewed Scott Metcalf, CEO of PathScale.

SC ONLINE: What is PathScale trying to solve?

METCALF: At PathScale, our goal is to provide the technologies that drive breakthroughs in science and engineering. Our products are enabling scientists and researchers to truly unlock the benefits of cluster computing systems. Until now, the promise of cluster computing hasn't been fully realized because many applications are unable to scale effectively. Using our products, the scientific and engineering communities can now leverage clusters to run simulations that were once reserved for supercomputers and large-scale SMPs. Most importantly, they can do it at an affordable price.

SC ONLINE: What are your core products being delivered today?

METCALF: Today, we have two products that enable users to cost-effectively scale their applications on cluster systems. The first is an award-winning Linux compiler suite called EKOPath™, which allows users to get the absolute best performance from their C, C++ and Fortran applications on 64-bit x86 processors, such as the AMD Opteron. EKOPath is generally acknowledged as the highest-performance 64-bit compiler suite and has won multiple awards. Our newest product, the InfiniPath™ InfiniBand Adapter, is the industry's highest-performance, lowest-latency cluster interconnect. InfiniPath lets users move data more rapidly and more efficiently between nodes in a cluster. It uses a standard InfiniBand switching fabric to deliver performance levels that haven't been experienced before, and it does this by eliminating the bottlenecks that increase communications latency in clusters. In essence, InfiniPath allows processors to do their work more efficiently, enabling users to solve problems faster.

SC ONLINE: How does the InfiniPath interconnect provide value to end-user customers?

METCALF: The real value InfiniPath provides is a faster time-to-solution at an affordable price. It's no secret that researchers and engineers need to solve problems very quickly. Providing the ability to solve a problem in 6 hours rather than 24 has enormous benefits for our customers. Another PathScale advantage is the ability to solve larger and more interesting problems. If you can solve a finite element analysis with more points, or a molecule with a greater number of atoms, you are pushing the state-of-the-art in your field. And you can do this in a more cost-effective way. With InfiniPath, you can work with a low-cost commodity switching fabric rather than a proprietary fabric, and because of our performance advantage, we can provide better computational efficiency.

SC ONLINE: In what industries is InfiniPath being deployed?

METCALF: The initial users of InfiniPath are in high performance computing.
This includes universities, government research laboratories and federal defense agencies, as well as commercial customers that work with simulation and modeling applications. The benefits realized vary by industry. In weather forecasting, for instance, our technology helps improve forecasting models, allowing meteorologists to understand what's coming sooner and with greater precision. For automakers, InfiniPath enables engineers to analyze more grid points and more data in airflow and crash analysis. That means they can build better, safer cars and bring them to market faster.

SC ONLINE: Can you discuss some of your customer installations?

METCALF: UC Davis was our first significant installation. They needed an HPC system that could deliver high message rates and very low latency on both large and small applications. They put together a rather large cluster of 36 nodes, each with 2 dual-core AMD Opteron processors, for a total of 144 cores, which is being used by the Geology Department to solve problems in computational chemistry. UC Davis' decision to go with an AMD Opteron-based system with our InfiniPath InfiniBand interconnect was based solely on the need for real-world performance. They were intrigued that InfiniPath attaches directly to the CPU via HyperTransport. After installing the system, the UC Davis IT and research teams found that the combined InfiniPath/AMD solution delivered one-third the latency of competing products. Today, it's enabling their researchers to complete projects that used to take 3 months in just a few weeks. And they are doing it on an affordable machine.

SC ONLINE: PathScale has been touting its performance in recent benchmarks, such as the HPC Challenge. Why is this important?

METCALF: The HPC Challenge and other well-known benchmarks finally prove to the world what we've known for a long time: InfiniPath can scale applications on clusters to performance levels that exceed larger, more expensive supercomputing systems. The benchmarks were performed at the AMD Developer Center on the Emerald cluster, AMD's newest and largest system at the center. The 512-core Emerald, with the InfiniPath InfiniBand interconnect, outperformed the 2,048-processor IBM Blue Gene, the 4,096-processor Cray XT3, and the 1,008-processor SGI Altix in the RandomAccess (GUPS), Random Ring Latency, and Natural Ring Latency benchmarks. These numbers should demonstrate to any organization looking to purchase an HPC system that it no longer has to buy from the traditional, high-priced supercomputing suppliers; it can now leverage clusters to tackle its most complex applications for a fraction of the cost. By the way, the AMD Emerald cluster is publicly available for any HPC end-user to remotely benchmark their applications and experience the InfiniPath advantages for themselves.
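The RandomAccess (GUPS) score mentioned above counts how many random read-modify-write updates per second a system can perform on one very large table, so it stresses exactly the properties at issue in this interview: interconnect message rate and latency rather than raw bandwidth, since on a cluster most updates land in another node's memory. As a rough illustration, the single-node core of that benchmark looks like the following sketch, a simplified rendering of the published HPC Challenge kernel (the seed and table size here are arbitrary choices for illustration, not benchmark-compliant parameters).

    /* Illustrative single-node sketch of the HPC Challenge RandomAccess
     * (GUPS) kernel: random 64-bit XOR updates into a large table.
     * On a cluster, most updates target remote memory, so the score is
     * dominated by interconnect message rate and latency. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define POLY 0x0000000000000007ULL  /* polynomial used by the benchmark's random stream */
    #define LOG2_TABLE_SIZE 24          /* 2^24 words = 128 MB table, scaled down for illustration */

    int main(void)
    {
        uint64_t table_size = 1ULL << LOG2_TABLE_SIZE;
        uint64_t updates = 4 * table_size;   /* the benchmark performs 4 updates per table word */
        uint64_t *table = malloc(table_size * sizeof *table);
        if (!table) return 1;

        for (uint64_t i = 0; i < table_size; i++)
            table[i] = i;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        uint64_t ran = 1;   /* arbitrary seed for illustration */
        for (uint64_t i = 0; i < updates; i++) {
            /* advance the pseudo-random stream and XOR it into a random slot */
            ran = (ran << 1) ^ ((int64_t)ran < 0 ? POLY : 0);
            table[ran & (table_size - 1)] ^= ran;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("GUPS (single node): %.4f\n", updates / secs / 1e9);

        free(table);
        return 0;
    }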
SC ONLINE: There was a lot of noise about InfiniBand and OpenIB at SC05. How important is that to your products?

METCALF: InfiniBand came on the scene a few years ago and was hailed with great promise. Clearly it provided a basic framework for a standards-based way of interconnecting processors. Unfortunately, prior InfiniBand products did not deliver on that promise. Today, with the InfiniPath adapter and the emergence of OpenIB, we are finally making InfiniBand a reality in the marketplace, and that was acknowledged at SC05. The key is providing customers a true standards solution, using a standard fabric with the InfiniPath adapters hooking into that fabric. This allows users to build clusters whose applications scale to address larger problems more efficiently.

SC ONLINE: Can you talk about the core technology that enables InfiniPath to achieve this high performance?

METCALF: The secret sauce is that we have a very simple, clean design. We connect directly on the host side to AMD Opteron processors via the high-performance HyperTransport bus. This gives us extremely low latency and enormous bandwidth relative to the network side. On the network side, we attach directly to standard InfiniBand switches, and in the middle we just move messages back and forth with extreme efficiency. We pipeline the messages going out; we pipeline the messages coming in. This provides low latency both from the host to the chip and from the chip to the network. In addition, we have a cut-through design that ensures very low latency on the receive side, which means we can support enormously high messaging rates. All of this leads to unprecedented performance for modeling and simulation applications.

Supercomputing Online wishes to thank Scott Metcalf for his time and insights.
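The microsecond-scale latency figures that run through this interview, from the ring-latency benchmarks to the one-third-the-latency comparison at UC Davis, are conventionally measured as half the round-trip time of a small message bounced between two processes on different nodes. A minimal MPI ping-pong along the following lines is the usual tool; this is an illustrative sketch that assumes a working MPI installation, not PathScale's own benchmark code.

    /* Minimal MPI ping-pong sketch: one common way to measure the
     * small-message latency figures quoted for cluster interconnects.
     * Latency is reported as half the average round-trip time between
     * two processes on different nodes. Illustrative only; assumes a
     * working MPI installation (e.g. compile with mpicc, run with
     * mpirun -np 2 across two nodes). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 10000;
        const int warmup = 1000;
        char buf[8] = {0};                    /* 8-byte message: latency, not bandwidth */
        double start = 0.0;

        for (int i = 0; i < warmup + iters; i++) {
            if (i == warmup)
                start = MPI_Wtime();          /* start timing after warm-up */
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double elapsed = MPI_Wtime() - start;
            printf("one-way latency: %.2f microseconds\n",
                   elapsed / iters / 2.0 * 1e6);
        }

        MPI_Finalize();
        return 0;
    }

The eight-byte payload keeps the measurement in the latency-dominated regime; sweeping the message size upward is how the bandwidth curves vendors quote are typically produced.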