Compute on Demand Drives Efficiencies in Oil and Gas Industry

By Tyler O’Neal

Appro and CyrusOne’s compute on demand (COD) center in Houston, Texas is helping companies use high-performance computing while realizing a return on technology investment and ensuring application availability, data security and superior network performance. Customers contracting for flexible computing pay only for the amount of capacity reserved for the duration of the contract period, with minimal financial and technical risk. In addition, they can tap into a scalable, secure and resilient on-demand operating environment while taking advantage of service and support. To learn more, Supercomputing Online interviewed Maria McLaughlin, Senior Marketing Director of Appro, at SC06.

Photo: Appro's Maria McLaughlin (left), Senior Marketing Director, and Daniel Kim (right), CEO, in the Appro booth at SC06.
Supercomputing Online: CyrusOne and Appro announced a partnership to provide high-performance computing services to the oil and gas industry. Please share the highlights with us.

McLaughlin: Against the backdrop of the oil industry’s historical supply/demand fluctuations, every upswing typically brings new challenges as well as open-ended opportunities. As shareholders of supermajor oil companies evaluate the balance between production and reserves, increased consumption continues to exert pressure on these companies to keep up the pace. As a result, seismic data processors are finding themselves dealing with an abundance of work. As companies scramble to identify and prove out untapped reserves, geophysical service companies are both interpreting new data and re-interpreting old data, virtually in fast-forward mode.

Supercomputing Online: Please tell us about the new geophysical environment.

McLaughlin: What primarily separates this cycle from previous ones is a technological leap that has multiplied the number of geophysical electronic data files. Further, data re-interpretation is occurring thanks to the development of customized, optimized software; in field jargon, “Now you can read more, see more and hear more.” Data has become a real growth industry. As a result, many leases are being re-evaluated, as today’s technology is more than 1,000 times more granular than even five or ten years ago. Therefore, large volumes of seismic data are being processed for the first time as well as being re-processed in an effort to increase reserves. However, this is precisely where a major problem crops up. Seismic shops can only run as many jobs as their IT infrastructure has capacity for. This means that critical seismic information may not be available to the geologist on time, or at all. To an oil producer, unprocessed data could mean a delay in the recognition of reserves.

Supercomputing Online: How did COD evolve?
McLaughlin: It became clear that an answer to the data capacity question was needed in order for data processing to become predictable and continuous; hence computing-on-demand (COD). Its advent allowed companies to perform reservoir modeling and seismic processing, and to run simulations, during times of high demand related to exploration or other business-driven activities. But COD did not enter the marketplace without some bumps in the road. Oil companies were faced with building out infrastructure in order to process the abundance of seismic data that inundated their geological teams. As more companies needed secure and reliable high-performance computing, providing a core infrastructure became a viable business. A critical mass of infrastructure was necessary to run the types of applications and jobs required in the geophysical field, and most companies did not have it. A second issue involved provisioning, the actual set-up. One of the challenges in the Linux world is that software performance has been optimized for particular hardware: companies created customized software to run only on specific pieces of hardware. In a company’s own controlled and standardized environment, that posed little problem. However, when a third party was brought in, other issues arose, such as ensuring that the appropriate operating system was available and that the software could actually run on a particular platform. While a major cost component for COD was incurred internally for equipment procurement, an even larger cost centered on the operation and hosting of the equipment.

Supercomputing Online: What was the solution?

McLaughlin: A new service offering needed to be developed to complement and enhance the existing COD offering. Its purpose was to help with customization and provisioning so that a job could be quickly installed, brought up and running, and then “cleaned off,” or removed in a secure manner.
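The install/run/clean-off cycle McLaughlin describes can be sketched as a simple node lifecycle. This is a hypothetical illustration of the idea, not Appro's or CyrusOne's actual tooling; all function and field names here are invented for the sketch:

```python
# Hypothetical sketch of the COD provisioning lifecycle: install a customer's
# environment on a node, run the job, then securely wipe the node so the next
# customer can use the same infrastructure.

def provision(node, os_image, software):
    """Set up the customer's operating system and customized software."""
    node["os"] = os_image
    node["software"] = software
    node["state"] = "provisioned"

def run_job(node, job):
    """Run one job on a provisioned node and report the result."""
    node["state"] = "running"
    result = f"{job} completed on {node['os']}"
    node["state"] = "done"
    return result

def clean_off(node):
    """Securely remove all customer data so the node can be re-leased."""
    node.clear()
    node["state"] = "clean"

node = {}
provision(node, "linux-custom", "seismic-migration-suite")
print(run_job(node, "prestack-depth-migration"))
clean_off(node)
print(node["state"])
```

The point of the fast "clean off" step is exactly what the interview goes on to say: the sooner a node is wiped, the sooner another customer or application can take over the infrastructure.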
By doing that expeditiously, another customer or application could come in and take advantage of the infrastructure. In addition, high-performance computing users gain a significant competitive advantage by using compute-on-demand services, which eliminate the total cost of ownership of a new capital asset acquisition, or of modifying their data center to accommodate the increased cooling and power costs associated with blade technology. CyrusOne, an industry leader in data center services, partnered with Appro, a well-known server and storage cluster solutions provider, to develop a new model for COD that meets the needs of customers.

Supercomputing Online: How does it work?

McLaughlin: This new technique addresses the traditional disadvantages of COD: infrastructure, provisioning and costs. Instead of an oil company spending capital dollars on building an infrastructure to support data processing, the company leverages the existing infrastructure of a data center. By opting for this economic model, companies are able to cut the costs associated with provisioning and with equipment purchases that would traditionally depreciate over three years. A spike in data processing needs is affordable and no longer a problem, as customers pay for what they use, as they go: “on demand.” Overall, seismic data can be immediately interpreted without a major capital investment in hardware, software or more data center space. From a business perspective as well as a technological one, this new model makes considerably better sense. Even in times of $70-per-barrel oil, seismic data processing groups are still understandably cost-conscious and project-centric. This model lets them focus on software and their core skill of interpretation, while using clusters at a mega center on an “as needed” basis. In 2005, Appro and CyrusOne tested this new model with a major oil and gas company during a pilot.
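The pay-as-you-go economics behind this model reduce to a simple comparison: a capital purchase depreciated over roughly three years plus hosting costs, versus paying only for the node-hours actually reserved. The toy calculation below illustrates the shape of that trade-off; every figure in it is hypothetical and does not reflect Appro or CyrusOne pricing:

```python
# Toy comparison of owning cluster capacity vs. computing on demand.
# All numbers are invented for illustration only.

def ownership_cost(capex, years, annual_opex):
    """Capital purchase depreciated over `years`, plus annual hosting,
    power and cooling costs."""
    return capex + years * annual_opex

def on_demand_cost(node_hours_used, rate_per_node_hour):
    """Pay only for the capacity actually reserved and used."""
    return node_hours_used * rate_per_node_hour

# A shop whose demand spikes to full capacity for only ~3 months a year:
own = ownership_cost(capex=3_000_000, years=3, annual_opex=500_000)
cod = on_demand_cost(node_hours_used=2000 * 24 * 90,  # 2,000 nodes, ~90 days
                     rate_per_node_hour=0.50)
print(f"own: ${own:,.0f}  on-demand: ${cod:,.0f}")
```

Under these made-up numbers, bursty workloads favor on-demand capacity; a shop running at full utilization year-round would see the balance shift back toward ownership, which is why the model targets demand spikes rather than steady-state load.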
The oil company simply wanted to keep up with project workloads without having to expand hardware acquisitions and build more data centers. Based on the customer’s requirements, CyrusOne and Appro created the external infrastructure for the customer. That is when the pilot testing turned into a contracted engagement. Using Appro’s blade technology and CyrusOne’s computing service, this major oil and gas company contracted to tap into 2,000 cluster nodes over the following three months to perform reservoir modeling and seismic processing, and to run simulations, during times of high demand related to exploration or other business-driven activities.

Supercomputing Online: Is there anything else you would like to add?

McLaughlin: Oil and gas customers have become more cautious, realizing that extra-high oil prices will not likely last forever. The ability to reduce COD costs, even by a few cents, has a significant impact on their bottom line. Consider the time issues that have contributed to acceptance of COD: approximately twelve months are required to build a data center, followed by six months to provision, install and set up equipment and then go live operationally. If COD were still in the conceptual stage, many companies might still take a “wait and see” approach. This successful customer experience validated COD as a viable business model for seismic data processing and interpretation that companies of all sizes can take to the bank.

Supercomputing Online wishes to thank Maria McLaughlin for her time and insight.

Appro is a provider of high-performance enterprise computing systems, headquartered in Milpitas, CA, with a sales/service office in Houston, TX, and a research and development/manufacturing center in Asia. To learn more about Appro’s blade computing clusters, or to download its latest white papers, visit Appro’s website. CyrusOne is a data center operator and a provider of COD services, primarily to the oil and gas industry.
For more information on this solution, visit Appro’s booth 1634 at SC06, November 11-17, 2006, in Tampa. Representatives from Appro will be available for questions.