SDSC Dashes Forward with New Flash Memory Computer System

Large-Memory Resource First of its Kind among Major HPC Systems

Leveraging lightning-fast technology already familiar to many from the micro storage world of digital cameras, thumb drives and laptop computers, the San Diego Supercomputer Center (SDSC) at the University of California, San Diego today unveiled a “super-sized” version – a “flash” memory-based supercomputer that accelerates investigation of a wide range of data-intensive science problems. 

The new high-performance computing (HPC) system, dubbed “Dash,” is an element of the Triton Resource, an integrated, data-intensive resource that went online earlier this summer and is designed primarily to support UC San Diego and UC researchers. As envisioned, this “system within a system” will help researchers seeking solutions to particularly data-intensive problems in astrophysics, genomics, and many other domains of science.

While Dash, which has already begun trial runs, is a medium-sized system as supercomputers go, with a peak speed of 5.2 teraflops (TF), it has several unique properties, including the first use of flash memory technology in an HPC system, in the form of Intel High-Performance SATA Solid-State Drives. Four of its nodes are specially configured as I/O nodes, each serving up 1 terabyte (TB) of flash memory to any other node, courtesy of new I/O controllers also developed by Intel Corporation and integrated by Appro International, Inc. (One terabyte equals one trillion bytes of storage capacity.)

The system features 68 Appro GreenBlade servers with dual-socket, quad-core Intel Xeon processor 5500 series (formerly codenamed Nehalem) nodes linked by an InfiniBand interconnect. In its current configuration, Dash has 48 gigabytes (GB) of DRAM on each node and employs vSMP Foundation software from ScaleMP, Inc., which provides virtual symmetric multiprocessing and aggregates the memory of 16 nodes into shared-memory “supernodes,” giving users access to as much as 768 GB of shared DRAM in addition to 1 TB of flash memory per “supernode.”
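The 768 GB shared-memory figure follows directly from the per-node specifications quoted in this announcement; a quick back-of-the-envelope check (all numbers taken from the system description above):

```python
# Back-of-the-envelope check of Dash's "supernode" memory figure,
# using the node counts and sizes quoted in this announcement.

dram_per_node_gb = 48      # DRAM on each GreenBlade node
nodes_per_supernode = 16   # nodes aggregated by vSMP Foundation

shared_dram_gb = dram_per_node_gb * nodes_per_supernode
print(shared_dram_gb)      # 768 GB of shared DRAM per supernode
```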