SCIENCE
Dona Crawford addresses China's rise as supercomputing power
The question was raised in the popular and computing trade press following the release of the latest Top500 list of the world's most powerful supercomputers at the November 2010 Supercomputing conference in New Orleans. Chinese supercomputers were ranked Nos. 1 and 3 on the industry-standard benchmark. However, the question of global high performance computing (HPC) dominance is more complicated and nuanced than the Top500 ranking indicates.
Crawford addressed the many dimensions of supercomputing in a response delivered to the Commonwealth Club on Feb. 23:
For years I've listened to Commonwealth Club broadcasts, attended its events, and read its magazine because, as a scientist, I value the airing of all sides of important issues and discussion topics. In the world of R&D where I live, it is this friction of competing ideas that often sparks innovation.
I was asked to answer the question: Has China surpassed the U.S. in supercomputing? Since the world revels in sound bites, I'll start off with a sound bite answer to that question. If you consider the Top500 list as your metric, the answer is yes, China has surpassed the U.S. in supercomputing. I know, however, because you're at the Commonwealth Club, that you're interested in more than just a sound bite, but for grins I'll give you another, different sound bite answer to the same question later in my talk. What's more important than either sound bite is why you care about the answer to the question posed for tonight's discussion. To get started, let me provide a little background. Part of that context is terminology. I will use the acronym HPC and the phrase high performance computing interchangeably with the word supercomputing. My talk will focus on supercomputing, but be advised that today's supercomputing is what ends up in tomorrow's electronic devices in your pocket or on your wrist. Nonetheless, my talk focuses on supercomputing. Currently, the world's fastest supercomputer systems perform at a rate of quadrillions of floating point operations per second, or petaflops (10^15 operations per second), and we have already begun thinking about how to transition from the petascale era to the exascale era (10^18 operations per second), a three-orders-of-magnitude improvement.
For some background: over the last 10 to 20 years supercomputers have transformed the way we conduct scientific research and enabled discovery and development across a broad set of disciplines, from physics, chemistry and bioscience to engineering. Simulation -- the ability to virtually mimic physical phenomena with great accuracy -- is now considered a peer to theory and experiment, and a pillar of the scientific method defined by Isaac Newton more than 300 years ago. The simulation capabilities that supercomputers allow have advanced medicine, energy, aviation and even manufacturing domains...like the packaging of potato chips. The massive, complex simulations that run on supercomputers allow us to explore fields such as global climate change, as well as tackle problems for which experiments are impractical, hazardous or prohibitively expensive.
Let me provide a concrete example from my experience at Lawrence Livermore where we apply high performance computing to national security issues. The large scale, high-fidelity simulations describing nuclear weapons have allowed the National Nuclear Security Administration to ensure the safety, security and reliability of the nation's nuclear deterrent without underground nuclear testing. No underground nuclear tests have been conducted since 1992 and no new weapons have been introduced into the stockpile. Nuclear weapons are extremely complex devices, with thousands of components made from a variety of materials that must work together seamlessly to produce a nuclear detonation. Plastics can break down and give off potentially destructive gases, metals can corrode and weaken, and coatings can deteriorate. Some materials may change properties unpredictably in response to the high radiation fields, fluctuating temperatures, and other environments to which nuclear weapons are subject. In the absence of developing new nuclear weapons, experts must work to extend the life of existing units and understand how their constituent materials and components age.
Being able to do this has allowed not only assessment of the stockpile but also dismantlement of the stockpile under treaty agreements at an accelerated pace, and it has given our country the confidence to enter into the New START treaty with Russia. None of this would have been possible without the power of supercomputing.
Indeed, it was time-urgent questions about the safety and reliability of the nuclear stockpile that drove the U.S. Department of Energy and its National Nuclear Security Administration to invest in and develop the generations of supercomputers that have dominated global HPC, beginning in the early 1990s. These computing capabilities, developed for national security, have broad applications in the scientific community and also in industry. For example, Boeing used supercomputers at Oak Ridge National Laboratory to accelerate the design of its 787 and 747-8 airliners. Boeing's Doug Ball summarizes the major benefits of HPC this way: "It lets engineers design better airplanes with fewer resources, in less time, with far less physical simulation based on wind tunnel testing," he says. "For example, when we were designing the 767 back in the 1980s, we built and tested about 77 wings. By using supercomputers to simulate the properties of the wings on recent models such as the 787 and the 747-8, we only had to design seven wings, a tremendous savings in time and cost." Imagine the savings of this 10x improvement.
Those two examples describe how supercomputers underpin the scientific discovery, technological advancement and engineering innovation critical to the nation's national security, economic competitiveness and quality of life. Beyond those examples, computers make us more productive in many ways. In education, they help broaden our thinking. In hospitals, we have better diagnoses. In business, they are used to keep track of stocks of raw materials as well as finished products. In banks, they are used for day-to-day processing of customers' accounts and payments, etc., etc. These examples are about using the computer to process information. But the technology underpinning the computer is itself important and has fueled the Internet, GPS, the cell phone, safety devices in your car and mass international communication through social media. For years, we've also grappled with using computers for artificial intelligence. Ray Kurzweil's 1990 book "The Age of Intelligent Machines" portends a time when computers will be able to think. Without going into all the ins and outs and ups and downs of AI, I will just reference the IBM computer Watson that won on Jeopardy last week. Did you see it? I was jazzed! The beauty of Watson is its ability to process natural language, make sense of it and then do what computers do best: very quickly look through LOTS of data. Wouldn't you love to have a Watson behind the scenes when you're pushing buttons at a call center trying to get an answer to a simple question? Or helping you with a Web search to a natural language question?
This then is the context for why computers, and supercomputers in particular, are so important. Even if you don't use a supercomputer, most electronic devices you use today are a product of supercomputer technology from five to 10 years ago. Supercomputers are at the foundation of our advancement in almost anything you can think of, and that's why you care about our leadership in HPC.
Now let's get back to my sound bite response to the question: Has China surpassed the U.S. in supercomputing? In my sound bite, I referred to the Top500 metric. Let me explain. Twice a year starting in 1993, a list has been generated of the top 500 supercomputers in the world. The Top500 list, as it's called, is published at the International Supercomputing Conference held in Germany in June and again at the U.S. Supercomputing Conference held in November. An algorithm for solving a dense system of linear equations, called the LINPACK benchmark, is run on each system entered into the competition. The computers are ranked by their performance on this benchmark, and the result is a single number, the measure of a computer's floating-point rate of execution. This performance number does not reflect the overall performance of a given computer system, because no single number can do that, but it does provide a means of comparing computer systems on a common problem. Let me reiterate, the Top500 is not the be-all and end-all (more about that later), but it is useful because it is widely accepted and understood.
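To make the benchmark concrete, here is a minimal Python sketch of the kind of measurement a LINPACK-style ranking rests on. It is not the actual HPL code used for Top500 submissions; it simply times a dense linear solve and converts the elapsed time into a floating-point rate using the conventional operation count for an LU-based solve.

    # Illustrative sketch only -- not the actual HPL/LINPACK benchmark.
    # It solves a dense system Ax = b, as LINPACK does, and reports a
    # floating-point rate using the standard (2/3)n^3 + 2n^2 flop count.
    import time
    import numpy as np

    def linpack_style_rate(n=2000, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((n, n))
        b = rng.standard_normal(n)

        start = time.perf_counter()
        x = np.linalg.solve(A, b)          # dense LU factorization + solve
        elapsed = time.perf_counter() - start

        residual = np.linalg.norm(A @ x - b)       # sanity check on the solve
        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # conventional flop count
        return flops / elapsed, residual           # ops per second, residual

    if __name__ == "__main__":
        rate, residual = linpack_style_rate()
        print(f"Approximate rate: {rate / 1e9:.2f} gigaflops (residual {residual:.2e})")

Running a toy measurement like this on a laptop yields somewhere in the gigaflops range, which is what makes the petaflop figures on the list so striking: the machines at the top sustain rates tens of thousands of times higher on a vastly larger problem.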
Press reports following the release of last November's Top500 list called it a "Sputnik moment" for the United States. For the first time, Chinese supercomputers vaulted to the Nos. 1 and 3 rankings on the list, displacing HPC systems at Oak Ridge and Los Alamos national labs. The current No. 1 system on the Top500 list is the Chinese Tianhe-1A system at the National Supercomputer Center in Tianjin, achieving a performance level of 2.57 petaflops. No. 2 on the list is Jaguar, a Cray XT5 system at Oak Ridge National Laboratory in Tennessee, with 1.76 petaflops - quite some distance behind when systems are usually neck and neck. The No. 3 computer is another Chinese system in Shenzhen, followed by a Japanese system in Tokyo. The next six systems are U.S., France, U.S., U.S., Germany and U.S., respectively.
This represents a dramatic change from just a few years ago. Again using the latest Top500 as our metric, the total number of Chinese systems out of 500 is 41. Six months earlier, last June, the Chinese had only 24 of the top 500 systems. Over the years from 1993 until November 2001, the Chinese had zero, one or two systems on the Top500 list. But in 2001, they began their push for the top. So you can see this has been a long, 10-year haul, and it is truly laudable.
So yes, China has the world's fastest supercomputer and the third fastest supercomputer on the Top500 list. As seen earlier, high performance computing is essential both for national security and for industrial competitiveness in the world economy. China's major investments in HPC show they recognize this and are willing to focus money, energy and creativity in this direction. While this was a stunning achievement, it does not mean China has surpassed the U.S. in supercomputing. To address this question, it's important to look not just at the hardware but at the software and applications running on these systems - what I call the computing ecosystem. It includes the physics underlying some particular aspect of reality, the mathematical models that describe the physics, the algorithms that turn those models into solvable computer code, and yes, the computer architecture, but also the low-level software such as compilers and debuggers, performance optimizers and load balancing tools, the networking hardware and software, the storage systems and the visualization capabilities - all of which make the computer hardware usable. The U.S. has been working for six decades (China 10) on the computing ecosystem in a balanced, sustained way, and we can now point to discoveries, scientific insights and accomplishments, such as those I previously described, enabled by U.S. supercomputing. U.S. systems have been road tested, have racked up countless compute miles, and have more science and technology milestones to show than their Chinese counterparts.
Let's take a closer look at the most recent Top500 list. Half the supercomputers in the top 10 are U.S. systems. A number of the foreign systems on the list are based in whole or in part on U.S. technology, including the Chinese Tianhe-1A, which uses compute node technology from Santa Clara-based NVIDIA.
The U.S. has held the No. 1 spot on this list for 24 out of 36 editions (Japan 11, China now 1). In addition, 274 systems (compared to China's 41), or almost 55 percent of the list, are in the U.S., and U.S. hardware companies manufactured 90 percent of the Top500 systems. The top five U.S. systems are at Oak Ridge National Laboratory (No. 2), Lawrence Berkeley National Laboratory (No. 5), Los Alamos National Laboratory (No. 7), Lawrence Berkeley again (No. 8) and, at No. 10, a partnership between Sandia National Laboratories and Los Alamos. Note that all five of these systems are at Department of Energy national labs. These labs provide much more than systems. My data is best, of course, for my laboratory. Livermore is a national security lab, along with Los Alamos and Sandia. Our mission is to do long term research and development to help understand and solve pressing national security problems broadly defined -- such as energy security, environmental security, economic security and certainly what one normally thinks about as traditional national security, namely stewardship of our nuclear weapons stockpile.
A few months after Livermore Lab was established in 1952, we had our first supercomputer. Computing is in our blood. We use computers to study the world around us, to try to understand the intricacies of physical phenomena that occur at vanishingly short time scales and at extreme pressures and temperatures.
Recently Livermore teamed with Navistar Inc., NASA's Ames Research Center and the U.S. Air Force to develop and test devices for reducing the aerodynamic drag of semi-trucks. These simple devices, designed by considering the tractor and trailer as an integrated system, and also taking into account operational requirements, increase fuel efficiency by as much as 12 percent. If deployed across the U.S. trucking fleet, they could prevent 36 million tons of carbon dioxide from being released into the atmosphere annually, roughly the same amount of CO2 that is emitted from four 1-gigawatt power plants every year.
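As a rough sanity check on that comparison, here is a back-of-the-envelope sketch in Python. The emission factor is an assumption of mine (about 1 kilogram of CO2 per kilowatt-hour, in the range of a coal-fired plant), not a figure from the talk.

    # Back-of-the-envelope check of the four-power-plant comparison.
    # The emission factor is an assumed, illustrative value (~1 kg CO2/kWh);
    # actual factors vary widely by fuel and plant.
    CO2_PER_KWH_KG = 1.0
    PLANT_POWER_GW = 1.0
    HOURS_PER_YEAR = 8760

    kwh_per_year = PLANT_POWER_GW * 1e6 * HOURS_PER_YEAR      # 1 GW = 1e6 kW
    tons_per_plant = kwh_per_year * CO2_PER_KWH_KG / 1000.0   # kg -> metric tons

    print(f"One 1-GW plant:   ~{tons_per_plant / 1e6:.1f} million tons CO2/year")
    print(f"Four such plants: ~{4 * tons_per_plant / 1e6:.1f} million tons CO2/year")

Four plants come out to roughly 35 million tons a year under these assumptions, consistent with the 36-million-ton figure cited for the trucking-fleet savings.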
Similarly, our expertise in computer simulations will be an integral part of a new national effort in energy efficient building research. Buildings are modeled component by component, room by room, usually without people moving in and out of them. Considering all these factors together as an integrated system and modeling the dynamics of that system between day and night, along with seasonal variation, is a complex, multiphysics calculation requiring a supercomputer. In other examples, we model the regional effects of global climate change on water and agriculture, we study the structural response of buildings and bridges to earthquakes, look into the basic building blocks of matter, simulate the response of bacteria to antibiotics, understand how to extend the life of nuclear power plants and do fundamental research - to name a few.
Being No. 1 on the Top500 list is exciting, don't get me wrong, but what really drives us at Livermore Lab is the problem we're trying to solve. In fact, our goal is not to be No. 1 but to be able to do the calculations we need to do to accomplish our mission. Back to the stockpile stewardship described earlier. In the early 1990s, when President Bush declared a moratorium on the underground nuclear tests used to assess the stockpile, the three NNSA labs did a back-of-the-envelope calculation that determined we would need 100 teraflops to do a full-physics, full-system calculation at a certain resolution, with the calculation taking no more than one month to run on the whole machine.
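The arithmetic behind that kind of sizing exercise is straightforward. The sketch below uses a hypothetical total operation count (not the labs' actual estimate) to show how a one-month wall-clock budget translates into a required sustained rate.

    # Machine-sizing sketch: required sustained rate = total work / time budget.
    # The total operation count is a hypothetical, illustrative number,
    # not the labs' actual estimate.
    SECONDS_PER_MONTH = 30 * 24 * 3600       # about 2.6 million seconds
    total_operations = 2.6e20                # assumed work in the full-system run

    required_rate = total_operations / SECONDS_PER_MONTH
    print(f"Required sustained rate: ~{required_rate / 1e12:.0f} teraflops")

A month-long budget for roughly 2.6 x 10^20 operations implies a machine sustaining on the order of 100 teraflops on that single calculation, which is the flavor of reasoning behind a target like the one the labs set.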
It didn't matter that at the time no one even had a computer capable of a single teraflop, or that the Top500 was barely in existence. This kind of mission driver sharpens one's focus and compels one to make progress in a real way - progress on the whole ecosystem. So when I talk about being leaders in HPC, I mean we, the U.S., own the intellectual property for all aspects of that ecosystem, and today, Feb. 23, 2011, the U.S. is the undisputed world leader in HPC hardware, software and applications. I cite two reports that document that claim: the National Research Council's 2004 "Getting Up to Speed: The Future of Supercomputing" and the National Science and Technology Council's 2006 "High End Computing Revitalization Task Force."
Therefore my second sound bite in response to our question is: No, China has not surpassed the U.S. in supercomputing but we have good reason to look over our shoulder, and here's why.
While the Chinese Tianhe-1A computer system is based on U.S. components, it uses many Chinese-made components, and China's goal is to be able to build supercomputers entirely based on indigenous technology. China has made consistent HPC investments over the last 10 years that resulted in its current position in HPC, and they have plans for the future. China has put next generation exascale computing, using indigenous technology, on a fast track, planning to have systems available by the end of this decade (2020). In the next five-year plan, China expects to have many petascale systems, at least one of which will be 100 petaflops. They have budgeted $2 billion for this. Their following five-year plan, from 2016 to 2020, takes them to between 1 and 10 exaflops. They have not published this budget. These facts and figures were taken from a talk given by Xue-bin Chi from the Supercomputing Center, Computer Network Information Center, of the Chinese Academy of Sciences. As indicated, an important piece of information in their plans is the fact that they are pursuing their own line of processors. It is based on the Godson-3 design, commercially called Loongson. This technology will be in their Dawning computer, which is expected to deliver a petaflop later this year (in 2011).
There has always been international competition in HPC, but 2011 is different from the past. We are currently experiencing a significant technology disruption, and there is a different climate of global economic competitiveness. These two factors, coupled with the fact that the U.S. does not have a funded, comprehensive plan to address the technology transition to exascale, provide an opportunity for others to leapfrog the U.S.
Briefly, the technology disruption is this. The traditional exponential clock rate growth that has given us a 2x performance improvement every 18 to 24 months over the last 15 years has ended. Now, instead of increasing the clock frequency on a chip, we are doubling the number of cores on a chip. Multi-core computer systems come in many varieties, each with its own challenges regarding how to utilize on the order of a million, or potentially a billion, cores on a single problem, let alone how to get the operating system, performance tools, debuggers and other software to work across that much parallelism. Additional problems at that level of parallelism are reliability and resiliency. And finally, as we gang more and more cores together, the electricity required to power the system becomes staggering. In fact, it becomes the overarching technology challenge to solve for reaching the next level of supercomputing capability -- the exascale. These challenges are daunting and require a sustained, focused R&D effort with the best minds we can find in industry, academia and the national labs.
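To give a feel for the scale involved, here is a rough sketch; both the per-core rate and the energy per operation are assumed, illustrative values rather than figures from the talk.

    # Rough sketch of why core counts and power dominate the exascale problem.
    # Per-core rate and energy per operation are assumed, illustrative values.
    TARGET_FLOPS = 1e18              # one exaflop sustained
    flops_per_core = 8e9             # assumed: ~8 gigaflops per core
    joules_per_flop = 100e-12        # assumed: ~100 picojoules per operation

    cores_needed = TARGET_FLOPS / flops_per_core
    power_watts = TARGET_FLOPS * joules_per_flop

    print(f"Cores needed: ~{cores_needed / 1e6:.0f} million")
    print(f"Power draw:   ~{power_watts / 1e6:.0f} megawatts")

Under these assumptions an exaflop machine needs on the order of a hundred million cores and draws on the order of 100 megawatts, which is why parallelism, resilience and energy efficiency dominate the exascale conversation.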
As I said earlier, the U.S. does not have a funded comprehensive plan to address this technology transition. We do have a DOE joint Office of Science and National Nuclear Security Administration exascale initiative that has broad community support. There have been numerous scientific grand challenge workshops to understand the drivers for exascale computing (climate science, high-energy physics, nuclear physics, fusion energy, nuclear energy, biology, materials science, chemistry, national security and cross-cutting technologies). There is an exascale steering committee consisting of two representatives each from the six major DOE labs in HPC, and DOE is leading the international exascale software project, recognizing the need to collaborate where possible on the many challenges facing us in the changing technology. What we don't have is funding, at a time when Congress is trying to trim the budget.
The relationship of science to economic prosperity and competitiveness is clearly understood. Nonetheless, the trends are not encouraging. Over the four decades from 1964 to 2004, our government's support of science declined 60 percent as a portion of GDP. Meanwhile, other countries aren't holding back. China has a defined and published plan for supercomputing. China is now the world leader in investing in clean energy, which will surely be one of the industries of the future. Overall, China invested $34.6 billion in the energy sector in 2009; the U.S. invested $18.6 billion. China's rate of peer-reviewed S&T publications in the open literature surpassed that of the U.S. in 2007.
There's also trouble on the education front. In January, the Washington Post reported that U.S. undergraduate institutions award 16 percent of their degrees in the natural sciences or engineering, while China awards 47 percent. America ranks 27th among developed nations in the proportion of students receiving undergraduate degrees in science or engineering. On a per-student basis, state support of public universities has declined for more than two decades and was at the lowest level in a quarter-century before the current economic downturn.
I earlier referred to America's "Sputnik Moment" which occurred in 1957. That moment galvanized support for science and technology and the education needed to make that S&T possible. In 1958, Congress established NASA, which led not only to men on the moon, but also to huge breakthroughs in computers, building materials and other technologies. The same year lawmakers passed the National Defense Education Act, which increased federal investment in education nearly six-fold.
Here I have to quote David Gergen, senior political analyst at CNN and director of the Center for Public Leadership at the Harvard Kennedy School, from his July 28, 2010 Commonwealth Club presentation: "A sage once said that America is excellent when we have a wolf at the door; we are pretty terrible when we have termites in the basement." Supercomputing may lack the glamour of the space race, but failing to meet the challenge before us has consequences that reach much further and more broadly into our economic future. Supercomputers have become a differentiating tool for discovery and innovation, with profound impacts on science, national security and industrial competitiveness. In its 2008 report "The New Secret Weapon," the U.S. Council on Competitiveness said: "Supercomputing is part of the corporate arsenal to beat rivals by staying one step ahead on the innovation curve. It provides the power to design products and analyze data in ways once unimaginable." The bottom line, according to the council, is that to "out-compete, you need to out-compute."
As we look to the future, supercomputing is at a crossroads. The next generation of exascale supercomputers cannot be developed merely by refining and scaling up current technology. Today's top petaflop systems have nearly reached the design limits of current HPC technology. A sustained R&D effort will be required to overcome the huge technical challenges of exascale computing. We will need to reduce five- to ten-fold the electricity required to power supercomputers. Without this reduction, exascale computers would need hundreds of megawatts -- enough to power a small city -- at an unacceptable cost of hundreds of millions of dollars a year just to pay the electricity bill. In addition, computers at this scale will need "self-awareness" to overcome the failures inevitable in machines with millions or billions of components performing quintillions of simultaneous tasks. Needless to say, the development and later integration of these new technologies into consumer products such as cell phones and laptops represents a tremendous economic opportunity for the nations that bring them to market.
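The cost figure follows from simple arithmetic; the sketch below assumes a 200-megawatt draw and a 10-cent-per-kilowatt-hour electricity price, both illustrative values rather than numbers from the talk.

    # Operating-cost sketch behind the "hundreds of millions of dollars" figure.
    # Power draw and electricity price are assumed, illustrative values.
    power_mw = 200                 # assumed draw of an unoptimized exascale machine
    price_per_kwh = 0.10           # assumed electricity price, dollars per kWh
    hours_per_year = 8760

    annual_kwh = power_mw * 1000 * hours_per_year
    annual_cost = annual_kwh * price_per_kwh
    print(f"Annual electricity bill: ~${annual_cost / 1e6:.0f} million")

At 200 megawatts around the clock, that comes to roughly $175 million a year before a single piece of hardware is purchased, which is why a five- to ten-fold reduction in power is treated as the overarching exascale challenge.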
For some people the numbers associated with supercomputing are mind boggling - trillions and quadrillions of operations per second, moving to quintillions. It's not unreasonable to ask just how much computing power is needed. The simple answer: when we have enough computing capability to provide a digital proxy of the entire universe, we will be done. That's a long way off. There are many problems close at hand that are limited by today's supercomputers. The modeling and simulation of climate change is an example. We've learned much from climate modeling, but these efforts are still rudimentary, running coupled systems to simulate the atmosphere, ocean, sea ice or land surface. Today's supercomputers simply don't have the capacity to include all the physics needed to conduct high-resolution, integrated simulations of climate change on a global scale. Like climate modeling, each scientific domain has problems awaiting the arrival of exascale systems and beyond.
To summarize, we're entering this new supercomputing era as the global leader in HPC. But, our leadership is being threatened. Unlike the past, U.S. leadership faces an unprecedented challenge, principally from China, but also other global players. China is steadily increasing its investment and marshaling the technological capabilities to quickly develop next-generation supercomputers. They clearly understand the value of this investment. We in the U.S., on the other hand, are working to formulate an investment strategy in the face of budget-cutting political pressures. We need to understand we are in a global competition with determined rivals. Competition is generally a good thing; it makes everyone better and sparks innovation. But we need to make a commitment to compete with equal resources and determination. My own personal belief has to do with equality. When I'm competing I want to do so with someone who's just about my equal. It's no fun to play with someone a lot better than me, or to play with someone who's well behind me. Not only is it not fun, it doesn't help me improve. So it's not that I want to beat China per se; it's that I want us to have parity with them. I don't want to rely on them for the chip technology embedded in the supercomputers we use for national security. I don't want to rely on them for the low level software that runs my supercomputer because they figured out the parallelism before we did. I don't want to rely on them, or anyone else, for my own standard of living, for my safety and security, for the inventions that propel us forward, for open dialog and communications, all of which rely on supercomputing. I want the U.S. to be self reliant, capable and responsible for our own prosperity.
If we are to be partners in a world of global competition, I want us to come from a position of strength based on the best U.S. industry, academia and the national labs have to offer. That's what put us and has kept us in the leadership role we enjoy today in supercomputing. It's imperative we now begin to push forward on the necessary technology to ensure a continued leadership position. The stakes are very high.