High-performance computing clusters riding on the power of x86 platforms are slicing into the mainstream. Consider this example: a new prime brokerage providing trading, financing, portfolio analysis, and reporting for multibillion-dollar hedge funds needed a competitive edge. Its larger rivals had the advantage of expensive mainframes that could consolidate and analyse millions of trades each day and return, via overnight batch processing, reports that measured performance on a monthly basis. To keep up, this company opted instead to outclass its competitors by returning trade performance information in near real time, with performance measured on a daily basis and performance attribution on multiple levels: against other securities in a market sector, against numerous benchmarks, and against other traders in the firm. What’s more, it did it using an inexpensive computer cluster made up of four dual-processor servers.
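To make the arithmetic behind that edge concrete: multi-level performance attribution boils down to comparing each trader's daily return against a series of reference points. Below is a minimal sketch in Python; the trader name, reference levels, and return figures are all hypothetical, invented purely for illustration, not the brokerage's actual model.

```python
# Hypothetical sketch of multi-level performance attribution: one
# trader's daily return compared against a sector, a benchmark, and
# the firm's other traders. All names and figures are invented.

def active_return(portfolio_return: float, reference_return: float) -> float:
    """Excess (active) return of a portfolio over a reference."""
    return portfolio_return - reference_return

def attribute(daily_return: float, references: dict) -> dict:
    """Attribute one trader's daily return against several reference levels."""
    return {level: active_return(daily_return, ref)
            for level, ref in references.items()}

if __name__ == "__main__":
    # Hypothetical daily figures, as fractions (0.012 == 1.2%).
    references = {
        "sector (energy)": 0.0080,
        "benchmark (S&P 500)": 0.0045,
        "firm average": 0.0102,
    }
    report = attribute(0.0121, references)  # trader's daily return
    for level, excess in report.items():
        print(f"{level}: {excess:+.4%}")
```

Run nightly per trader across a few worker servers, a calculation like this is cheap enough to report daily rather than monthly, which is the whole point of the story above.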
This story is a perfect illustration of where the HPC (high-performance computing) market stands today. Multimillion-dollar systems from Cray, Fujitsu, IBM, and NEC are rapidly giving way to clusters or grids of inexpensive x86 servers from mainstream vendors such as Dell, HP, and IBM. “The HPC market has been turned on its ear,” says Earl Joseph, Research VP for high-performance systems at IDC. “Cray, NEC, and Fujitsu now make up less than 1%, while HP and IBM are at about 31% each, with Sun at 15% and Dell at 8.5%.”
Up-and-coming Linux hardware vendors have also picked up on this trend and have begun selling stacks of standards-based server clusters into the traditional HPC/technical computing markets, such as higher education, the life sciences, oil and gas, and industrial design. More importantly, however, inexpensive HPC is finding its way into much smaller environments than before, as well as into financial, search, and logistics applications previously outside its province. “The HPC market has shown more growth than any other IT sector,” says IDC’s Joseph, “up 49% in the past two years.” Kriti Kapoor, Enterprise Business Manager at Dell EMEA, has some interesting numbers to share. “According to the IDC view on HPC based on industry standard servers, the market grew at 24% last year, which accounted for a $9 billion global market. Of this, $4.4 billion, or roughly 50% of the market, was represented by Intel-based systems,” Kapoor adds.

Clusters and grids
The basic premise of HPC is simple. Instead of running compute-intensive applications on one large, specialised system, high-performance clusters and grids divide the processing load among anywhere from two to thousands of separate servers, workstations, or even PCs, as the sketch below illustrates. The actual architecture used, however, will vary depending on the nature of the application and where the hardware resides.
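That basic premise - split a big job into independent pieces, compute them in parallel, combine the answers - can be sketched in a few lines. The toy example below uses Python's standard multiprocessing module, with worker processes standing in for cluster nodes; the workload and the four-way split (echoing the four-server cluster above) are invented for illustration.

```python
# Toy scatter/gather sketch of the basic HPC premise: split a
# compute-intensive job into independent chunks, run them in
# parallel, and aggregate the results. Worker processes stand in
# for cluster nodes; a real cluster would use a job scheduler.
from multiprocessing import Pool

def analyse_chunk(trades):
    """Stand-in for a compute-intensive per-chunk calculation."""
    return sum(t * t for t in trades)  # placeholder arithmetic

if __name__ == "__main__":
    trades = list(range(1_000_000))          # invented workload
    n_nodes = 4                              # e.g. four dual-CPU servers
    size = len(trades) // n_nodes
    chunks = [trades[i * size:(i + 1) * size] for i in range(n_nodes)]

    with Pool(processes=n_nodes) as pool:
        partials = pool.map(analyse_chunk, chunks)  # scatter

    print("aggregate result:", sum(partials))       # gather
```

Because each chunk is independent, nothing in the calculation cares whether the workers are processes on one box or servers on a rack, which is exactly why commodity hardware suffices.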
Forrester Research divides clustering and grid-computing architectures into three categories: uncoupled, loosely coupled, and tightly coupled. The uncoupled architecture, best exemplified by Web server load balancing, is more relevant to handling streams of small requests than to HPC applications. In the loosely coupled architecture, a workload scheduler, usually running on a head server, splits large application requests into many smaller, parallel, independent tasks and distributes them, along with small amounts of data, among the servers making up the cluster. The job management software may or may not have to aggregate the results. Although many installations consist of a single dedicated departmental or datacentre server cluster, another way to implement low-cost HPC is to distribute work across a number of shared, geographically dispersed resources in what is known as a grid. A grid can run across a few company departments or datacentres, or it can cross company boundaries to partner sites and service providers. Today, however, dedicated clusters are by far the most common scenario. “When I go out and talk to people, I see lots of dedicated clusters running a single application, only a handful of shared grids spanning multiple geographies, and no examples of grids spanning multiple firms,” Forrester’s Gillett says.

Then there are some scenarios for which the entire loosely coupled clustering paradigm is unsuited. Applications such as weather forecasting, seismic analysis, and fluid analysis have to run interdependent calculations that require message-passing among cluster nodes during job execution, according to Forrester Research, which means they need a more tightly coupled architecture (see the message-passing sketch at the end of this section).

HPC near you
Does this mean that high-performance computing is coming to hundreds of enterprise datacentres near you? It depends on whom you ask. “In the beginning, with increased requests for HPC clusters built on commodity servers, we thought we had hit a new market opportunity. But these are clearly the signs of an emerging market that has long been in the making,” says Ryan D’Souza, Product Manager for Industry Standard Servers at HP Middle East. According to D’Souza, the trend is evident even in the Middle East, and he sees HPC beginning to make a lot of sense to customers who operate in mainstream environments like education. “Interestingly, even educational institutes in the region are trying to beef up their R&D abilities, so investing in HPC infrastructure is valuable for them in the long run,” he adds. “We noticed that between 2000 and 2001 the oil and gas sector in the region was the main segment using x86-based HPC. However, we are now seeing this trend gain momentum in other industry segments like insurance, finance, and education. Commoditisation has been a key driver,” notes Andy Parkinson, xSeries Manager at IBM Middle East.
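To see why the tightly coupled case is different, consider what the loosely coupled scatter/gather sketch earlier cannot express: nodes that must talk to each other mid-computation. The de facto standard for this is MPI (Message Passing Interface). Below is a minimal sketch using the mpi4py Python binding, with an invented one-dimensional diffusion stencil standing in for a weather or fluid model; it assumes an MPI runtime and mpi4py are installed and would be launched with something like `mpirun -n 4 python stencil.py`.

```python
# Minimal tightly coupled sketch: a 1-D diffusion stencil in which
# each MPI rank owns a slab of the domain and must exchange boundary
# cells with its neighbours on every step - the pattern that plain
# scatter/gather clustering cannot express. Invented data throughout.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

cells = [float(rank)] * 8                      # this rank's slab
left = rank - 1 if rank > 0 else None          # neighbouring ranks, if any
right = rank + 1 if rank < size - 1 else None

for step in range(10):
    # Swap halo (boundary) values with neighbours; sendrecv avoids deadlock.
    halo_left = (cells[0] if left is None
                 else comm.sendrecv(cells[0], dest=left, source=left))
    halo_right = (cells[-1] if right is None
                  else comm.sendrecv(cells[-1], dest=right, source=right))

    # Simple diffusion update that depends on the neighbours' data.
    padded = [halo_left] + cells + [halo_right]
    cells = [(padded[i - 1] + padded[i + 1]) / 2.0
             for i in range(1, len(padded) - 1)]

print(f"rank {rank}: {cells[0]:.3f} .. {cells[-1]:.3f}")
```

Each rank can only advance once its neighbours' boundary values arrive, which is why such workloads demand the low-latency interconnects discussed below.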
The role of x86
The x86 platform, which has grown from its roots as an inexpensive, low-end Wintel box that wasn't to be trusted with mission-critical applications into the most widely purchased server in the world, capable of supporting workloads once limited to expensive mainframes and Unix systems, is clearly the choice for the highly demanding HPC arena. And the power of the x86 architecture, originally developed by Intel, is expected only to grow. Intel and competitor Advanced Micro Devices (AMD) support 64-bit computing alongside traditional 32-bit, have introduced dual-core processors, and are integrating virtualisation technologies into their silicon to make virtualised workloads perform better. “The primary reason behind the adoption of x86 in HPC is the price vs performance ratio. However, the reason for its popularity in recent years is a combination of factors making its use in HPC a viable option: high-speed interconnects such as Gigabit Ethernet and InfiniBand; a substantial decrease in footprint, without performance compromise, through rack-dense systems and blade servers; and the recent introduction of 64-bit and multi-core processors,” says Pantelis Verginadis, NEC Technical Sales Consultant at interFRONTIERS Services. IT buyers can expect updated x86-based systems from the major server vendors this summer - most notably, a new chip architecture from Intel aimed at increasing energy efficiency and boosting performance - with additional enhancements such as embedded security and power management tools following not long after. AMD and Intel plan to debut quad-core processors in 2007.

It is worth noting, though, that if the future server architecture for the entire enterprise is a cluster, as industry pundits say, then the server itself becomes just a component and the server operating system a device driver. The real OS will be a layer of resource scheduling and allocation software (a naive sketch of such a layer appears at the end of this section). “With clusters going mainstream on x86, the focus will certainly be on the application itself, which can now be easily scaled across the cluster. Earlier, on proprietary technology, maintenance would be the major obstacle. With increased commoditisation of the infrastructure, companies can now have easy maintenance; however, they need to understand that they will still need strong support on application integration,” D’Souza notes.
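To caricature that scheduling layer: its core job is to keep a queue of jobs and hand each one to a node with free capacity. The deliberately naive sketch below illustrates the idea; the node names, slot counts, and jobs are all invented, and real schedulers add priorities, dependencies, and failure handling.

```python
# Deliberately naive sketch of the "real OS" of a cluster: a resource
# scheduler that assigns queued jobs to whichever node has enough free
# CPU slots. All node names, capacities, and jobs are invented.
from collections import deque

class Node:
    def __init__(self, name: str, slots: int):
        self.name, self.free = name, slots

def schedule(nodes, jobs):
    """Greedily place each job (FIFO) on the first node that fits it."""
    queue, placements = deque(jobs), []
    while queue:
        job, needed = queue[0]
        target = next((n for n in nodes if n.free >= needed), None)
        if target is None:
            break                 # nothing fits; job waits for a release
        target.free -= needed
        placements.append((job, target.name))
        queue.popleft()
    return placements, list(queue)

if __name__ == "__main__":
    cluster = [Node("node01", 2), Node("node02", 2), Node("node03", 4)]
    jobs = [("render", 2), ("simulate", 4), ("analyse", 1), ("archive", 3)]
    placed, waiting = schedule(cluster, jobs)
    print("placed: ", placed)
    print("waiting:", [j for j, _ in waiting])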
A full-service approach
Today, however, the challenge is building and managing a viable high-performance computing implementation and, in particular, getting help from software and hardware vendors that can deliver a complete solution. Companies are now starting to invest in hardware to support their applications instead of migrating the applications to suit different environments, which is a very expensive proposition. An interesting twist to the tale is that industry-wide standardisation now gives customers a wider choice and puts vendors under more pressure to retain them. The difference, and the USP, vendors believe, will be in the support and integration services offered. “The challenge naturally lies in the fact that the customer can now choose any product to fit into his infrastructure. However, we expect to retain customers with a continued focus on standards, by offering better hardware and system management tools and building blocks,” Kapoor says. HP, for its part, says it goes to market with an end-to-end offering, right from design to implementation. “We do a full factory integration, and offer training and services to customers. Currently in the Middle East we are seeing requests for HPC clusters on the x86 platform of up to 500 nodes in the mainstream sectors,” HP’s D’Souza says. Sun Microsystems similarly offers pre-built, pre-configured rack grids, together with software and infrastructure management tools, as a package for customers wanting to set up an HPC infrastructure.

Trickling into the mainstream
High-performance clustering is certainly getting cheaper, but taking advantage of it is not as easy as it may seem. Clustering in the mainstream is also driven by the fact that software written for it needs to become more accessible to the generic customer. “The industry has had to go through a learning curve in the case of HPC. Primarily these clusters were based on Linux environments, and most applications suited to them did not really run on Windows. This meant that companies using Linux HPC had to build skills to run it. But now, with more market proliferation of x86 clusters, tools have also become more accessible and manageable,” IBM’s Parkinson says. The ISV community, including in the Middle East, is also working closely with leading vendors to develop more application expertise. “Standardised systems and better tools accessibility have helped HPC on industry standard servers get into the mainstream. What we can expect to see as an industry, driven by the movement in this segment, is products available at lower price points, smaller and more compact form factors, better performance, and more commercially available and optimised applications,” says Iain Jardin, Solutions Architect, Sun Microsystems MEA. “HPC is more about the type of workload, and the application drives the trend. Sun, for instance, is trying to offer more out-of-the-box solutions, including our grid software that will help companies scale their applications over a distributed computing architecture,” Jardin adds. While software and tool availability might be the more general direction for the future of HPC, two recent developments specifically hold promise for pushing HPC further into the mainstream. The first is Microsoft’s entry into the HPC market in the first half of 2006 with Windows Compute Cluster Server 2003. Microsoft is aiming squarely at the applications that now rely on Linux HPC solutions by partnering with classic HPC application vendors such as Accelrys, MathWorks, and Schlumberger, which plan to build Windows versions of their HPC applications.
“It wouldn’t be too difficult for a biologist to set up a small Windows Compute Cluster of servers in his office rather than having to go to the organisation’s ‘high priest of clustering’,” says Jeff Price, Senior Director for the Windows server group at Microsoft.

The second exciting development is the movement toward SOA (service-oriented architecture). Because SOA is inherently componentised, SOA application workloads are easier to distribute across a clustered environment. SOA is all about abstracting away the fundamental plumbing - messaging, multithreading, the execution environment - into a container, done once, so the application developer can just focus on writing the application logic. SOA will make grid computing easier, and grids will be a must for successful SOA. As a growing number of enterprises begin to see the advantages of cluster and grid computing, these technologies will undoubtedly work their way into other mainstream areas. The combination of more widespread use, easier multi-OS clustering, and SOA may indeed make high-performance clustering and grid computing a fairly mainstream enterprise application.
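The clustering appeal of SOA is easiest to see in code: a service that keeps no state between requests can be replicated on every node of a cluster and fronted by a simple load balancer, with no node-to-node coordination. Below is a bare-bones sketch using only Python's standard library; the service, path, port, and pricing logic are invented for illustration, and a real SOA stack would add the container, registry, and messaging layers described above.

```python
# Bare-bones sketch of why SOA components cluster well: the handler
# below keeps no state between requests, so identical copies can run
# on every node behind a load balancer. Service path, port, and
# pricing logic are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingService(BaseHTTPRequestHandler):
    """A stateless 'service': the response depends only on the request."""

    def do_GET(self):
        # e.g. GET /price?qty=3 -> {"qty": 3, "total": 29.97}
        qty = 1
        if "?" in self.path:
            _, query = self.path.split("?", 1)
            params = dict(p.split("=", 1) for p in query.split("&"))
            qty = int(params.get("qty", "1"))
        body = json.dumps({"qty": qty, "total": round(qty * 9.99, 2)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Any node in the cluster can run this identical process.
    HTTPServer(("0.0.0.0", 8080), PricingService).serve_forever()
```

Because no request leaves anything behind on the node that served it, adding capacity is just a matter of starting more copies - the property that makes SOA workloads natural candidates for clusters and grids.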