ACADEMIA
Supersizing the supercomputers: What's next?
At the William R. Wiley Environmental Molecular Sciences Laboratory, the Linux-based supercomputer is composed of nearly 2,000 processors. Supercomputers excel at highly calculation-intensive tasks, such as molecular modeling and large-scale simulations, and have enabled significant scientific breakthroughs. Yet supercomputers themselves are subject to technological advances and redesigns that allow them to keep pace with the science they support. The current vision of future supercomputers calls for them to be highly heterogeneous. For example, rather than a central processing unit (CPU) paired with memory, disk and interconnect, the CPU will contain cores of smaller CPUs making up a larger whole, and systems will include different types of processors, such as vector processors and field programmable gate arrays (FPGAs). The location and type of memory will become more complex as well.
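To give a rough sense of what such heterogeneity looks like to the software, the short C sketch below queries a node at run time for its host cores and any attached accelerator devices using the OpenMP runtime, then picks a code path accordingly. The kernel names and the decision logic are illustrative assumptions, not a description of EMSL's actual systems.

```c
#include <stdio.h>
#include <omp.h>

/* Hypothetical kernels; the names are placeholders for illustration only. */
static void run_on_accelerator(void)        { printf("Running offloaded kernel on device 0\n"); }
static void run_on_host_cores(int nthreads) { printf("Running threaded kernel on %d host cores\n", nthreads); }

int main(void)
{
    /* Query what this (possibly heterogeneous) node offers at run time. */
    int host_threads = omp_get_max_threads();   /* CPU cores visible to OpenMP */
    int devices      = omp_get_num_devices();   /* attached offload devices, if any */

    printf("Host threads: %d, offload devices: %d\n", host_threads, devices);

    /* Choose a code path based on the hardware actually present. */
    if (devices > 0)
        run_on_accelerator();
    else
        run_on_host_cores(host_threads);

    return 0;
}
```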
High-performance components, encapsulated chunks of software that perform specific tasks, will be coupled to a dynamic framework that allows scientists and the software itself to determine dynamically which algorithms, or modifications to algorithms, will perform well on a particular architecture. Multiple levels of parallelism will be explored, including parallelism across components, parallelism within a component, parallelism within a subroutine, and threading. These supercomputers of the future will provide orders of magnitude more computing power, but their increasing complexity will also require experts in computational science, mathematics and computer science to work together to develop the software needed for the science.
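As a minimal sketch of how two of those levels can be combined in practice, the hybrid example below uses MPI for coarse-grained parallelism across processes and OpenMP threading within each process. The partial-sum workload is a made-up stand-in for real component work, not code from any EMSL framework.

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Coarse-grained level: each MPI process takes a contiguous slice of the problem. */
    long chunk = N / nprocs;
    long lo = rank * chunk;
    long hi = (rank == nprocs - 1) ? N : lo + chunk;

    /* Fine-grained level: threads within the process share the slice. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = lo; i < hi; i++)
        local_sum += 1.0 / (double)(i + 1);   /* placeholder workload */

    /* Combine the per-process results into a single answer on rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Harmonic partial sum over %d terms: %f\n", N, global_sum);

    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper and OpenMP enabled (for example, mpicc -fopenmp) and launched with mpirun, each process threads its portion of the loop while MPI handles the cross-process reduction, illustrating how distinct levels of parallelism can be layered within one application.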