ACADEMIA
NVIDIA to Sponsor New Stanford Parallel Computing Research Lab
Pervasive Parallelism Lab Exploits the Capabilities of Parallel Computing: NVIDIA Corporation has announced that it is a founding member of Stanford University’s new Pervasive Parallelism Lab (PPL). The PPL will develop new techniques, tools, and training materials to help software engineers harness the parallelism of the multiple processors already found in virtually every new computer. NVIDIA’s investment complements the company’s ongoing strategy of solving some of the world’s most computationally intensive problems with its market-leading GPUs and world-class tools and software. The company has enjoyed significant success to date with its Tesla line of GPU computing hardware and, more importantly, with CUDA, its award-winning programming environment that gives developers access to the GPU’s massively parallel architecture through the industry-standard C language (a minimal kernel sketch follows the list below).

“Parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fueled the computing industry, and several related industries, for the last 40 years,” says Bill Dally, chair of the computer science department at Stanford.

Until recently, massively parallel installations could be deployed only in large-scale computer centers housing hundreds to thousands of separate computer systems. With the recent introduction of many-core processors such as the GPU and the multicore CPU, most new computer systems now come equipped with multiple processors, and exploiting that parallelism requires new software techniques. Without such techniques, computer scientists are concerned that the rapid increases in computing speed could stall.

The PPL’s research will span the stack, from fundamental hardware to new, user-friendly programming languages that exploit parallelism automatically: programmers will implement their algorithms in accessible, “domain-specific” languages, while the deeper, more fundamental levels of the software stack do the work of optimizing the code for parallel processing.

“NVIDIA has been tackling parallel computing challenges since its founding and, as a result, the GPU has evolved into an incredibly powerful processor, capable of running thousands upon thousands of concurrent operations,” said David Kirk, chief scientist at NVIDIA. “We applaud, and are proud to be a part of, Stanford University’s formation of the PPL and its mission to push the software industry to expose the inherent parallelism in today’s computers.”

NVIDIA GPU technology, combined with the CUDA programming environment, has delivered speedups of anywhere from 8× to 50× over conventional processors. Some examples, with each company’s Web site and measured speedup, include:
- Seismic imaging (www.hess.com): 45×
- AutoDock protein docking (www.siliconinformatics.com): 12×
- Financial options pricing (www.hanweckassoc.com): 50×
- Medical imaging (www.techniscanmedicalsystems.com): 8×
- H.264 video transcoding (www.elementaltechnologies.com): 19×
- EDA/SPICE simulation (www.nascentric.com): 8×
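To make the programming model concrete, the following is a minimal, self-contained sketch of a CUDA C program: a function marked __global__ (the “kernel”) is executed by thousands of GPU threads at once, each handling one array element. The kernel, constants, and array size are illustrative assumptions for this article, not code from NVIDIA, the PPL, or any of the applications listed above.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of y = a*x + y (SAXPY).
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;              // one million elements (illustrative)
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) data.
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

    // Copy the result back and spot-check one value (expect 2*1 + 2 = 4).
    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}

The <<<blocks, threads>>> launch syntax is the extension CUDA adds to standard C; everything else is ordinary C code, which is what makes the environment accessible to developers already familiar with the language.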
NVIDIA joins AMD, Hewlett-Packard, IBM, Intel, and Sun Microsystems in this venture. For more information on NVIDIA GPU computing solutions, please visit the company’s Web site.