Carnegie Mellon researchers save electricity with low-power processors and flash memory
- Written by: Tyler O'Neal
- Category: GOVERNMENT
Researchers at Carnegie Mellon University and Intel Labs Pittsburgh (ILP) have combined low-power embedded processors typically used in netbooks with flash memory to create a server architecture that is fast yet far more energy efficient for data-intensive applications than the systems now used by major Internet services.
An experimental computing cluster based on this so-called Fast Array of Wimpy Nodes (FAWN) architecture was able to handle 10 to 100 times as many queries for the same amount of energy as a conventional, disk-based cluster. The FAWN cluster had 21 nodes, each with a low-cost, low-power, off-the-shelf processor and a four-gigabyte CompactFlash card. At peak utilization, the cluster drew less power than a 100-watt light bulb.
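That comparison boils down to queries per joule: how many lookups a cluster serves for each joule of energy it consumes. The numbers in the sketch below are purely hypothetical and only illustrate how the metric works; they are not the team's measurements.

```python
# Purely hypothetical numbers -- illustrating the queries-per-joule metric,
# not reporting the FAWN study's actual measurements.
def queries_per_joule(queries_per_second: float, watts: float) -> float:
    """Queries served per joule of energy (1 watt = 1 joule per second)."""
    return queries_per_second / watts

fawn_cluster = queries_per_joule(queries_per_second=35_000, watts=90)  # ~390 queries/joule
disk_cluster = queries_per_joule(queries_per_second=3_000, watts=750)  # ~4 queries/joule
print(f"Efficiency advantage: {fawn_cluster / disk_cluster:.0f}x")     # on the order of 100x
```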
The research team, led by David Andersen, Carnegie Mellon assistant professor of computer science, and Michael Kaminsky, senior research scientist at ILP, received a best paper award for its report on FAWN at the Association for Computing Machinery's annual Symposium on Operating Systems Principles Oct. 12 in Big Sky, Mont.
A next-generation FAWN cluster is being built with nodes that include Intel's Atom processor, which is used in netbooks and other mobile or low-power applications.
Developing energy-efficient server architectures has become a priority for datacenters, where the cost of electricity now equals or surpasses the cost of the computing machines themselves over their typical service life. Datacenters being built today require their own electrical substations, and future datacenters may need as much as 200 megawatts of power.
"FAWN systems can't replace all of the servers in a datacenter, but they work really well for key-value storage systems, which need to access relatively small bits of information quickly," Andersen said. Key-value storage systems are growing in both size and importance, he added, as ever larger social networks and shopping Web sites keep track of customers' shopping carts, thumbnail photos of friends and a slew of message postings.
Flash memory is significantly faster than hard disks and far cheaper than dynamic random access memory (DRAM) chips, while consuming less power than either. Though low-power processors aren't the fastest available, the FAWN architecture can use them efficiently by balancing their performance with input/output bandwidth. In conventional systems, the gap between processor speed and bandwidth has continually grown for decades, resulting in memory bottlenecks that keep fast processors from operating at full capacity even as the processors continue to draw a disproportionate amount of power.
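One common way to pair a modest processor with flash along these lines is to keep a small index in memory that maps each key to an offset in an append-only log on flash, so a read costs one index probe plus one flash access and writes become sequential appends. The sketch below illustrates that general pattern under those assumptions; it is not the researchers' implementation.

```python
import os

# Sketch of a flash-friendly node-local store: a small in-memory index maps each
# key to (offset, length) in an append-only log file, so lookups need one small
# read from flash and writes are sequential appends.
class LogStructuredStore:
    def __init__(self, path: str) -> None:
        self._index: dict[str, tuple[int, int]] = {}  # key -> (offset, length), kept in RAM
        self._log = open(path, "a+b")                 # append-only value log on flash

    def put(self, key: str, value: bytes) -> None:
        self._log.seek(0, os.SEEK_END)
        offset = self._log.tell()
        self._log.write(value)
        self._log.flush()
        self._index[key] = (offset, len(value))

    def get(self, key: str) -> bytes | None:
        entry = self._index.get(key)
        if entry is None:
            return None
        offset, length = entry
        self._log.seek(offset)
        return self._log.read(length)

store = LogStructuredStore("values.log")
store.put("thumb:alice", b"<jpeg bytes>")
print(store.get("thumb:alice"))
```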
"FAWN will probably never be a good option for challenging real-time applications such as high-end gaming," Kaminsky said. "But we've shown it is a cost-effective, energy efficient approach to designing key-value storage systems and we are now working to extend the approach to applications such as large-scale data analysis."
The work was supported in part by gifts from Network Appliance, Google and Intel Corp., and by a grant from the National Science Foundation. In addition to Andersen and Kaminsky, the research team included Ph.D. computer science students Jason Franklin, Amar Phanishayee and Vijay Vasudevan, and graduate student Lawrence Tan.