Researchers are developing computers capable of "approximate computing," which performs calculations that are good enough for tasks that don't require perfect accuracy, an approach that could double efficiency and reduce energy consumption.

"The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. On the other hand, there is an explosion in digital data searched, interpreted, and mined by data centers."

A growing number of applications are designed to tolerate "noisy" real-world inputs and use statistical or probabilistic types of computations.

"The nature of these computations is different from the traditional computations where you need a precise answer," said Srimat Chakradhar, department head for Computing Systems Architecture at NEC Laboratories America, who collaborated with the Purdue team. "Here, you are looking for the best match since there is no golden answer, or you are trying to provide results that are of acceptable quality, but you are not trying to be perfect."

However, today's computers are designed to compute precise results even when it is not necessary. Approximate computing could endow computers with a capability similar to the human brain's ability to scale the degree of accuracy needed for a given task. New findings were detailed in research presented during the IEEE/ACM International Symposium on Microarchitecture, Dec. 7-11 at the University of California, Davis.

This inability to scale the level of accuracy to what a task actually requires is inherently inefficient and saps energy.

"If I asked you to divide 500 by 21 and I asked you whether the answer is greater than one, you would say yes right away," Raghunathan said. "You are doing division but not to the full accuracy. If I asked you whether it is greater than 30, you would probably take a little longer, but if I ask you if it's greater than 23, you might have to think even harder. The application context dictates different levels of effort, and humans are capable of this scalable approach, but computer software and hardware are not like that. They often compute to the same level of accuracy all the time."

Purdue researchers have developed a range of hardware techniques to demonstrate approximate computing, showing a potential for improvements in energy efficiency.

The research paper presented during the IEEE/ACM International Symposium on Microarchitecture was authored by doctoral student Swagath Venkataramani; former Purdue doctoral student Vinay K. Chippa; Chakradhar; Kaushik Roy, Purdue's Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering; and Raghunathan.

Recently, the researchers have shown how to apply approximate computing to programmable processors, which are ubiquitous in computers, servers and consumer electronics.

"In order to have a broad impact we need to be able to apply this technology to programmable processors," Roy said. "And now we have shown how to design a programmable processor to perform approximate computing."

The researchers achieved this milestone by altering the "instruction set," which is the interface between software and hardware. "Quality fields" added to the instruction set allow the software to tell the hardware the level of accuracy needed for a given task. They have created a prototype programmable processor called Quora based on this approach.
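As a rough illustration of what a quality field buys the software (a hypothetical sketch, not the actual Quora instruction set, which is described in the paper below), imagine a vector instruction that carries the number of operand bits the hardware must honor; the hardware can then truncate operands accordingly, trading a small, bounded error for energy.

    # Hypothetical sketch of a quality-annotated vector instruction; the real
    # Quora ISA and microarchitecture differ and are described in the paper.
    from dataclasses import dataclass

    @dataclass
    class VecDotProduct:
        quality_bits: int   # how many high-order bits of each 16-bit operand to honor

    def truncate(value, bits):
        # Zero the low-order bits; in hardware, narrower operands mean smaller
        # multipliers and less switching activity, and therefore less energy.
        dropped = 16 - bits
        return (value >> dropped) << dropped

    def execute(instr, a, b):
        q = instr.quality_bits
        return sum(truncate(x, q) * truncate(y, q) for x, y in zip(a, b))

    a = [1200, 2500, 3300, 4100]
    b = [1800, 2200, 2900, 3500]
    exact = execute(VecDotProduct(quality_bits=16), a, b)    # full accuracy requested
    approx = execute(VecDotProduct(quality_bits=12), a, b)   # software accepts some error
    print(f"exact {exact}, approximate {approx}, "
          f"relative error {abs(exact - approx) / exact:.2%}")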

"You are able to program for quality, and that's the real hallmark of this work," lead author Venkataramani said. "The hardware can use the quality fields and perform energy efficient computing, and what we have seen is that we can easily double energy efficiency."

In other recent work, led by Chippa, the Purdue team fabricated an approximate "accelerator" for recognition and data mining.

"We have an actual hardware platform, a silicon chip that we've had fabricated, which is an approximate processor for recognition and data mining," Raghunathan said. "Approximate computing is far closer to reality than we thought even a few years ago."

The research was funded in part by the National Science Foundation. 


ABSTRACT

Quality Programmable Vector Processors for Approximate Computing     

Swagath Venkataramani†, Vinay K. Chippa†, Srimat T. Chakradhar‡, Kaushik Roy† and Anand Raghunathan†

† School of Electrical and Computer Engineering, Purdue University

‡ Systems Architecture Department, NEC Laboratories America

† {venkata0,vchipp,kaushik,raghunathan}@purdue.edu, ‡ chak@nec-labs.com

Approximate computing leverages the intrinsic resilience of applications to inexactness in their computations to achieve a desirable trade-off between efficiency (performance or energy) and acceptable quality of results. To broaden the applicability of approximate computing, we propose quality programmable processors, in which the notion of quality is explicitly codified in the HW/SW interface, i.e., the instruction set. The ISA of a quality programmable processor contains instructions associated with quality fields to specify the accuracy level that must be met during their execution. We show that this ability to control the accuracy of instruction execution greatly enhances the scope of approximate computing, allowing it to be applied to larger parts of programs. The micro-architecture of a quality programmable processor contains hardware mechanisms that translate the instruction-level quality specifications into energy savings. Additionally, it may expose the actual error incurred during the execution of each instruction (which may be less than the specified limit) back to software. As a first embodiment of quality programmable processors, we present the design of Quora, an energy efficient, quality programmable vector processor. Quora utilizes a 3-tiered hierarchy of processing elements that provide distinctly different energy vs. quality trade-offs, and uses hardware mechanisms based on precision scaling with error monitoring and compensation to facilitate quality programmable execution. We evaluate an implementation of Quora with 289 processing elements in 45nm technology. The results demonstrate that leveraging quality programmability leads to 1.05X-1.7X savings in energy for virtually no loss (<0.5%) in application output quality, and 1.18X-2.1X energy savings for modest impact (<2.5%) on output quality. Our work suggests that quality programmable processors are a significant step towards bringing approximate computing to the mainstream.
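The abstract's phrase "precision scaling with error monitoring and compensation" can be pictured roughly as follows. This is only a schematic sketch under assumed details, not the mechanism Quora actually implements: operands are truncated, the exact result is computed for a small sample of elements to monitor the error, and the sampled error is scaled up and added back as a coarse correction.

    # Schematic sketch (assumed details, not Quora's hardware) of precision
    # scaling with error monitoring and compensation for a dot product.
    import random

    def compensated_dot(a, b, keep_bits=10, sample_every=8):
        def trunc(x):
            dropped = 16 - keep_bits
            return (x >> dropped) << dropped

        approx_sum, sampled_error, samples = 0, 0, 0
        for i, (x, y) in enumerate(zip(a, b)):
            approx_term = trunc(x) * trunc(y)          # precision-scaled arithmetic
            approx_sum += approx_term
            if i % sample_every == 0:                  # monitor error on a small sample
                sampled_error += x * y - approx_term   # exact term only for sampled elements
                samples += 1
        # Compensate: assume the sampled error rate is representative of the rest.
        estimated_error = (sampled_error * len(a)) // samples if samples else 0
        return approx_sum + estimated_error

    random.seed(0)
    a = [random.randrange(1 << 16) for _ in range(256)]
    b = [random.randrange(1 << 16) for _ in range(256)]
    exact = sum(x * y for x, y in zip(a, b))
    print(f"relative error after compensation: {abs(exact - compensated_dot(a, b)) / exact:.3%}")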

Capitalizing on unique deep water technology, Paradigm continues to see rapid growth in Latin America.

Paradigm has announced that Petrogal Brasil has adopted its entire suite of interpretation, visualization and modeling software. As a result, the entire Petrogal Brasil installed base can carry out traditional and advanced structural and stratigraphic interpretation of the subsurface in the search for hydrocarbon reservoirs. The deal with Petrogal caps a year of rapid growth in Latin America for Paradigm. While the entire Paradigm product suite has seen broad acceptance in the region, customers like Petrogal have shown particular interest in differentiated technologies for interpretation and modeling, including SeisEarth, VoxelGeo and SKUA.

Petrogal Brasil began using Paradigm software in 2010 for exploration and production in the Santos, Sergipe-Alagoas and Potiguar Basins. The results have been positive enough that the company decided to expand its Paradigm interpretation and modeling portfolio to cover all of its exploration and development work.

"Using the Paradigm SeisEarth interpretation suite, which includes VoxelGeo, our exploration team has been able to obtain more accurate and reliable results in some of the challenging environments where we operate," said Carlos Gonçalves, exploration manager at Petrogal Brasil. "Additionally, we have come to rely on the excellent subsurface modeling results from the Paradigm SKUA technology and the on-demand support from the Paradigm team, who partnered with us each step of the way."

"We are delighted that Petrogal Brasil has chosen Paradigm as its sole interpretation and modeling software provider," said Dayse Netto, Brazil country manager at Paradigm. "Paradigm interpretation and modeling solutions are designed to deliver more accurate results without compromising productivity, and are particularly suitable to the complex deep water plays characteristic of our region. Building on a foundation of business transparency and collaboration, our team will continue to work closely with Petrogal Brasil to ensure they obtain more information out of existing data, and more confidence in their understanding of their prospects and reservoirs in Brasil."

A team led by an Iowa State University nuclear physicist has won a grant of 204 million hours on two of the country’s fastest supercomputers to study the structure and reactions of rare and exotic nuclei.

The grant allocates 104 million hours on the Titan supercomputer (a Cray XK7 hybrid system) at the Oak Ridge Leadership Computing Facility in Tennessee and 100 million hours on the Mira supercomputer (an IBM Blue Gene/Q system) at the Argonne Leadership Computing Facility in Illinois. The machines operate at the petascale and are capable of more than a quadrillion calculations per second.

The research team’s computing time on the two computers is for the first year of a grant that can be renewed for two additional years.

The supercomputing time is one of 59 grants recently awarded by the U.S. Department of Energy’s Office of Science. The grants are part of the department’s Innovative and Novel Computational Impact on Theory and Experiment program, or INCITE. The program awarded a total of nearly 6 billion supercomputing hours.

The Department of Energy does not release the monetary value of the supercomputer time.

“There are a lot of big puzzles out there that we’d like to solve,” said James Vary, an Iowa State professor of physics and astronomy and the leader of the nine-member research team that will use all that supercomputing power. “In our case, we use nuclei as labs to understand the fundamental laws of nature.”

That understanding can have major implications for our energy future.

“Predictions for the structure and reactions of nuclei, with assessed uncertainties, are important for the future of the nation’s energy and security needs,” the researchers wrote in a summary of their project. “Such calculations are relevant to many applications in nuclear energy, nuclear security and nuclear astrophysics.”

Vary said the team’s findings will be vital for the development of nuclear energy, including advanced nuclear fission reactors and fusion energy.

The project will focus on rare isotopes, nuclei that have more or fewer neutrons than an element’s most common form, Vary said. Oxygen, for example, has at least 17 known isotopes.

The structure and reactions of many isotopes can’t be studied in a laboratory because of their short lifetimes. And so supercomputing power is necessary to produce accurate simulations that can predict the nuclear structures, properties and reactions of the isotopes.

These aren’t easy problems: “That’s why we need 50-60 Ph.D.-level physicists working days and nights and holidays,” Vary said. “And we need supercomputers, too.”

In addition to Vary, the research team includes Pieter Maris, an Iowa State research associate professor of physics and astronomy; Joseph Carlson of Los Alamos National Laboratory in New Mexico; Gaute Hagen and Hai Ah Nam of Oak Ridge National Laboratory; Petr Navratil of  TRIUMF (Canada's national laboratory for particle and nuclear physics); Witold Nazarewicz of the University of Tennessee, Knoxville; Steven Pieper of Argonne National Laboratory and Nicolas Schunck of Lawrence Livermore National Laboratory in California.

CyrusOne’s Houston West campus delivers the highest known concentration of supercomputing and high-density colocation space of any multi-tenant data center company

“What we’ve created at CyrusOne’s Houston data center campus is a geophysical computing center of excellence with 31 megawatts of critical power load available for customers and an HPC area that can deliver up to 900 watts per square foot,” said Gary Wojtaszek, president and chief executive officer of CyrusOne. “This means that companies involved in oil and gas exploration research and development can deploy the latest high-performance technology they need to manage the extreme levels of computing required in processing seismic data. We also see HPC demand growing across other industries as companies seek to more efficiently collect, process, store, and report data.”

Leveraging Expertise in High-Density Data Centers

Continuing to boost the critical power load at its Houston West data center campus demonstrates CyrusOne’s commitment to remaining the preferred data center destination for the energy sector’s high-density deployments of supercomputing equipment such as servers that process extremely complex seismic data collected from around the world.

CyrusOne’s Houston West campus features what it believes is the largest known concentration of HPC and high-density data center space in the colocation market, with ultra-high-density infrastructure across more than 24,000 square feet supporting up to 900 watts per square foot. The Houston West campus also houses the first-ever enterprise HPC cloud solution for the oil and gas industry. The solution enables companies to align performance computing directly to project periods and refresh cycles—optimizing budgets, time commitments, and technology for oil and gas customers managing seismic processing demands.

The company’s deep experience with high-density data centers is also assisting customers in other industry sectors to realize the benefits of supercomputing. Customers can gain efficiencies by using supercomputing solutions when the data center is appropriately engineered to handle higher density levels.

Delivering Best-in-Class Enterprise Facilities and Robust Connectivity

CyrusOne specializes in highly reliable enterprise data center colocation solutions and engineers its facilities with the highest power redundancy (2N architecture) and the power-density infrastructure required to deliver excellent availability.

Customers also have access to the CyrusOne National IX, which combines low-cost, robust connectivity with the massively scaled data centers CyrusOne is known for. It is the first data center platform to virtually link a dozen of CyrusOne’s enterprise facilities across five metropolitan markets (Dallas, Houston, Austin, San Antonio, and Phoenix).

With the CyrusOne National IX, Fortune 500 enterprises can implement cost-effective, multi-location data center platforms that meet the disaster recovery requirements of Sarbanes-Oxley, HIPAA, PCI, NASDAQ, NYSE, and many other regulatory frameworks, because they can keep their infrastructure protected and accessible 100 percent of the time across multiple sites throughout the country.

CyrusOne is renowned for exceptional service and for building enduring customer relationships and high customer satisfaction levels. Customers include nine of the Fortune 20 companies and more than 125 of the Fortune 1000.

If widely adopted, the energy savings would be like taking 700,000 cars off the road every year

New super-efficient rooftop units that heat and cool commercial buildings offer significant energy and dollar savings, say scientists at the Department of Energy's Pacific Northwest National Laboratory. They found that the devices reduce energy costs by an average of about 41 percent compared to units in operation today.

The newly published report analyzes the operation of the commercial rooftop HVAC unit known as the Daikin Rebel, which was one of two units to meet DOE's Rooftop Challenge, a competition for manufacturers to create a rooftop unit that significantly exceeds existing DOE manufacturing standards. Daikin Applied was the first to produce such a unit, which was studied in depth by PNNL researchers; Carrier Corp. also met the challenge. The work is part of a broader DOE program known as the DOE Rooftop Campaign, which promotes the adoption of efficient rooftop units.

The PNNL study, done by scientists Srinivas Katipamula and Weimin Wang, is an in-depth look at the performance of the Rebel compared to other rooftop units in use today. The devices are usually nestled on building roofs, far from view but crucial to our comfort. The devices demand a significant proportion of the 18 quadrillion BTUs of energy that the nation's commercial buildings swallow every year.

The PNNL team estimates that if current rooftop units were replaced with devices similar to the Rebel over a 10-year period, the benefits in terms of energy saved and reduced pollution would be about equal to taking 700,000 cars off the road each year. Put another way, the reduced energy draw could idle about eight average-size coal-fired power plants in each of those 10 years.

If all rooftop units with a cooling capacity of 10 to 20 tons were replaced immediately, DOE officials estimate the cost savings at around $1 billion annually.

"There are great gains waiting to be made in energy savings, using technologies that exist today," said Katipamula, whose study was supported by DOE's Office of Energy Efficiency and Renewable Energy.

Katipamula and Wang ran extensive simulations analyzing the Rebel's performance compared to other rooftop units. The pair used DOE's EnergyPlus building energy simulation software and worked with detailed performance data supplied by Intertek of Cortland, N.Y., which tested the units in the laboratory. Katipamula and Wang also created several new computer models, a necessary step because they were simulating technology that had not existed before. The Rebel includes variable-speed fans and a variable-speed compressor, which allow it to respond more precisely to conditions inside a building than conventional technology.
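The energy leverage of variable-speed fans can be seen in a very simplified back-of-the-envelope model (our own illustration using the standard fan affinity laws, not the PNNL EnergyPlus models, which capture far more detail): fan power scales roughly with the cube of speed, so meeting part load by slowing the fan uses far less energy than meeting it by cycling a single-speed fan on and off at full power.

    # Very rough illustration using the fan affinity laws (power ~ speed cubed);
    # not the PNNL EnergyPlus models. All values below are hypothetical.
    RATED_FAN_POWER_KW = 5.0   # hypothetical rated fan power for a rooftop unit

    def variable_speed_energy(profile):
        # profile: list of (fraction_of_full_airflow, hours) pairs
        return sum(RATED_FAN_POWER_KW * frac ** 3 * hours for frac, hours in profile)

    def single_speed_energy(profile):
        # A single-speed fan meets the same part load by cycling at full power,
        # so its energy scales with the airflow fraction, not its cube.
        return sum(RATED_FAN_POWER_KW * frac * hours for frac, hours in profile)

    day = [(1.0, 2), (0.6, 6), (0.4, 4)]   # hypothetical daily operating profile
    print(f"variable speed: {variable_speed_energy(day):.1f} kWh/day")
    print(f"single speed:   {single_speed_energy(day):.1f} kWh/day")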

The team ran simulations for a typical 75,000-square-foot big-box store in three cities: Chicago, Houston, and Los Angeles. They compared performance of the Rebel to three types of units: those in use today, those that meet current federal regulations for new units, and those that meet more stringent requirements, known as ASHRAE 90.1-2010 standards. DOE designed the Rooftop Challenge to exceed the ASHRAE standards.

The team found that the Rebel reduced energy costs and use as follows:

  • Compared to units in operation today that are ready for replacement, energy costs were 33 percent less in Chicago, 44 percent less in Houston, and 45 percent less in Los Angeles. The Rebel slashed energy demand by 15 percent, 37 percent, and 36 percent, respectively.
  • Compared to new units that meet current federal regulations, costs were cut 29 percent, 37 percent, and 40 percent, in Chicago, Houston, and Los Angeles, respectively. Likewise, energy demand was reduced 12 percent, 30 percent, and 32 percent in those three cities.
  • As expected, savings were a bit less when compared with new units that meet today's strictest ASHRAE standards. Costs to run the Rebel system were 15 percent lower in Chicago, 27 percent lower in Houston, and 18 percent lower in Los Angeles. Energy demand was 8 percent, 23 percent, and 15 percent lower, respectively.
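For reference, the roughly 41 percent headline figure cited earlier lines up with a simple mean of the three city results from the first comparison; whether PNNL averaged it exactly this way is our assumption.

    # Quick arithmetic check of the "about 41 percent" average, assuming it is
    # the simple mean of the cost savings versus units in operation today.
    savings_percent = {"Chicago": 33, "Houston": 44, "Los Angeles": 45}
    average = sum(savings_percent.values()) / len(savings_percent)
    print(f"average cost savings: {average:.1f} percent")   # prints 40.7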

While the cost of the unit was not part of the team's analysis, Katipamula estimates it would take at least a few years for the latest technology to pay back the increased investment in the newer units. The team's analysis did not include a look at some of the unit's additional features, such as its potential to save energy used for heating.

"The savings depend very much on the particular conditions — the climate, the size of the store, the materials used in the construction, and so on," said Katipamula. "We've developed Energy Plus software models that allow building designers or owners to calculate for themselves the cost-effectiveness of installing a newer unit that meets the DOE rooftop challenge."

While "cost-effectiveness" might seem to break down into simple dollars and cents, that is often not the case for commercial buildings. Payback differs dramatically depending on whose investment is at stake; oftentimes, tenants pay the energy costs, and they often have no choice in what equipment a building owner or builder chooses to use. Katipamula says that offering incentives to builders is one way to increase their stake in using cost-efficient equipment.

The work by Katipamula and Wang shows how PNNL scientists and engineers have a strong but hidden hand in how buildings and homes throughout the nation are built. Their research helps provide the foundation for universally used commercial building codes: what energy savings are possible and practical, for instance, and what should be expected from buildings and builders in the future.
