Open Systems Lab announces the release of LAM/MPI 7.0

BLOOMINGTON, INDIANA — Indiana University's Open Systems Lab (http://www.osl.iu.edu/) has released version 7.0 of its widely deployed LAM/MPI parallel computing middleware. LAM/MPI is an open source implementation of the Message Passing Interface (MPI) standard: software that researchers and developers use to enable clusters and Grid-enabled computers to operate in parallel on a single problem. LAM/MPI offers high performance on multiple platforms, including Linux, Sun Solaris, SGI IRIX, IBM AIX, HP-UX, and Mac OS X, and supports multiple communication interconnects such as shared memory, Gigabit Ethernet, and Myrinet.

The most significant change in version 7.0 is LAM/MPI's new open component architecture. The modular design provides a flexible "plug-in" framework for selecting and changing run-time components and tuning parameters without recompiling user applications. MPI researchers can also easily extend the MPI implementation itself.

"Open software and, more importantly, open interfaces are essential for allowing software to be reused, refined, and continually developed by a user community," said Dr. Andrew Lumsdaine, director of the Open Systems Lab. "LAM's new open component architecture greatly simplifies the task of extending LAM's functionality."

This release also adds new systems administration features that make LAM well suited to production environments. "Using LAM's PBS interface and checkpoint/restart capabilities, system administrators can have fine-grained control over the scheduling of jobs in their cluster," noted Lumsdaine. New support for Globus allows MPI jobs to be executed across multiple administrative domains using the Grid.

"Resource scheduling is a big part of what systems administrators do every day," added Jeff Squyres, senior research associate in the Open Systems Lab and lead developer of LAM/MPI.
"The ability to checkpoint and restart parallel MPI jobs, even legacy MPI applications, is a critical feature in terms of logistical efficiency and fault tolerance."

Developers writing MPI programs will benefit from LAM/MPI 7.0's support for the Etnus TotalView parallel debugger. "Besides being MPI implementers, we're also MPI application developers," commented Squyres. "Using TotalView helps us develop LAM/MPI itself as well as parallel scientific applications."

To find out more about the new features of LAM/MPI 7.0, or to download the software, visit: http://www.lam-mpi.org/
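As a rough sketch of the run-time flexibility described above, the session below illustrates how switching interconnect modules without recompiling might look from a user's shell. The specific module names and flag syntax here are assumptions for illustration and should be checked against the LAM/MPI 7.0 documentation.

```shell
# Hypothetical LAM/MPI 7.0 session; module names and flags are
# illustrative assumptions, not quoted from this release.
lamboot hostfile                  # start the LAM run-time environment on the cluster
mpicc -o solver solver.c          # compile an MPI application with LAM's wrapper compiler
mpirun -ssi rpi tcp C solver      # run on all nodes using the TCP transport module
mpirun -ssi rpi gm  C solver      # rerun over Myrinet by swapping the module -- no rebuild
lamhalt                           # shut down the LAM run-time environment
```

The point of the component architecture is that the two `mpirun` invocations above use the same unmodified binary; only the run-time parameter selecting the communication module changes.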