SDSC Director Discusses DTF/Teragrid, SciDAC and More
By Steve Fisher, Editor In Chief -- To get a reaction to the DTF/Teragrid announcement, discover a bit more about SDSC's role in it, and to get some perspective on the very impressive developments in the HPC community over the last fortnight, Supercomputing Online sat down with SDSC Director Fran Berman.

Supercomputing: Congratulations on the recent DTF/Teragrid announcement. Would you please tell the readers a bit about the "Teragrid Data and Knowledge Management effort" that SDSC will be leading?

BERMAN: Absolutely, this is a very exciting part of Teragrid. We believe the Teragrid data/knowledge management effort will really introduce a new paradigm, and it will be a paradigm shift for application developers and the science and engineering community. If you think about it, scientists today have a limited amount of data. They go out to the big freezer, bring in data, and they're able to use it with their applications. Teragrid will enable them to have lavish amounts of data. They'll be able to analyze it online, synthesize it into knowledge, and mine it, and that will really enable some new breakthroughs in science. As far as what we're going to do, at SDSC in particular we're very interested in developing a whole data-oriented configuration. We plan to have the best data-oriented configuration for applications in the world, and the idea is to make it possible, through our hardware and software development, to really focus on data-oriented applications.

Supercomputing: Does undertaking such a large-scale project as DTF/Teragrid, which depends so heavily on clustered systems, signal the demise of the stand-alone supercomputer system?

BERMAN: I think absolutely not. I think that something like Teragrid really points to a future where you can combine all sorts of resources, including very important stand-alone supercomputers.
In fact, in Teragrid we're going to have two very large systems, at SDSC and NCSA, that would land on the top-ten list today. So I think it's a very exciting vision that combines supercomputer systems, large amounts of data, fast networks and remote instruments, so we can do the very complex applications on this platform that we need to support the things we're doing.

Supercomputing: Does a system like DTF/Teragrid have to be prioritized? Will it be prioritized? If so, what sort of research will be the primary beneficiary of the greater time and/or resources? Who decides how much time is awarded to whom?

BERMAN: I think the Teragrid system is just a very complex endeavor. We're looking at cutting-edge gear, cutting-edge software, and cutting-edge applications, not to mention the very challenging problem of social engineering with a staff distributed over four sites, including applications folks from all over the nation and all over the world. So we're looking at a complex system. To make it work, I think we absolutely have to have a strong management structure and a strong staged development effort in terms of procurement and acquisition of the hardware, development of the software, bringing applications onto the system, and so on. Like any other complex system. And actually the PACI program, the San Diego Supercomputer Center, NPACI, NCSA and the Alliance, has a lot of experience with exactly these sorts of complex processes for bringing people up to speed for production use on big systems. I think the primary beneficiary will be the kinds of research the PACIs are running now on large systems; all of those will be able to use the Teragrid. The idea is that people should be able to use the Teragrid with no more heroic effort than it takes now to run on our cutting-edge platforms.
So, that being said, I think there are a number of applications that will be able to use the potential of the Teragrid in a special way: the hundreds of terabytes, the fast networks, the massive number of flops. In particular, NSF's MRE applications, a number of applications carved out in the proposal, high-end applications from the PACI program, and so on will be able to make use of it. As for allocation on the Teragrid, NSF will of course be making the award and has its own allocation process, and that process will be worked in with the PACI allocation processes, so we'll be governed to a large extent by that. The interesting thing about allocation is that there will be users who want to use the Teragrid absolutely as a grid, and allocating grid resources rather than individual resources is a challenge we're dealing with now and will also have to deal with on the Teragrid.

Supercomputing: There's been a lot of amazing activity in HPC recently. What are your thoughts on the recent SciDAC program funding announcements benefiting institutions like LBNL, ORNL, Fermilab and others?

BERMAN: And us too, by the way. We have some SciDAC activities as well. I think the whole SciDAC program is very exciting, and if you look in aggregate at NSF activities, DOE activities, DOD activities, NIH activities and so on, what you see with SciDAC, Teragrid and all of these kinds of things is a recognition, in Washington in general and certainly among the funding agencies, of the importance of infrastructure for advances in science. And I think building this infrastructure is absolutely critical to the next generation of scientific advances. So I think SciDAC is really, really exciting, and Teragrid is exciting in much the same way. --------
Supercomputing Online wishes to thank Director Fran Berman for her time and viewpoints. It would also like to thank SDSC’s Dave Hart and Ange Mason for their assistance.