I just came back from the annual Oil and Gas High Performance Computing (HPC) Workshop hosted by Rice University and can say the event has grown rapidly into a broad investigation of the use and role of HPC in the Oil & Gas segment. In terms of attendance, the conference impressed with more than 300 registrants.
For the first time, it also featured “Lightning Talks”: those rapid-fire, few-slide, special-effects, big-font presentations so beloved by the West Coast/Silicon Valley computer industry. (Perhaps next year we’ll see some VCs showing up? The HPC industry in Oil & Gas is certainly big enough, and innovating dramatically enough, to warrant their presence.)
At such events, it’s customary to focus on the keynote and plenary session presentations — usually a fair mixture of speakers from industry, HPC vendors, and national labs alike, reviewing the current state of the art, discussing major trends from the last few years, and laying out plans and developments for the future — and the workshop did not disappoint at all in this regard: David Bernholdt from ORNL drew attention to programming challenges at extreme scale, Randall Rheinheimer from LANL gave an overview of the broad spectrum of DOE R&D programs in preparation for extreme scale, and a forward-looking industry perspective was given by Peter Breunig from Chevron. It was all capped off by Bill Dally from Nvidia, who made a plea for innovations in system architecture and circuit design to overcome the end of Dennard scaling in semiconductor process technology.
In this short space, however, I would rather discuss Application Session II, which featured three talks in a row (“An HPC Platform for Simulation of Giant Reservoir Models,” presented by Majdi Baddourah of Saudi Aramco; “The Impact of Discontinuous Coefficients and Partitioning on Parallel Reservoir Simulation Performance,” presented by Jonathan Graham of ExxonMobil; and “Strong Scalability of Reservoir Simulation on MPP Computers: Issues and Results,” presented by Vadim Dyadechko of ExxonMobil) that combined for a pleasantly surprising, perfectly executed 1-2-3 pitch for the use and role of HPC in reservoir simulations. Talk 1 introduced GigaPOWERS, Saudi Aramco’s simulator that is capable of simulating multibillion(!) cell reservoir models, with particular emphasis on the requirements imposed by its integration into the complex data and workflow processes of reservoir management. Talk 2 featured much smaller models, but with complex geometries and sophisticated iterative solvers. It clearly demonstrated the importance of transmissibility-weighted graph partitioners for reducing and “stabilizing” the (surprisingly large) variation in the number of solver iterations — with direct impact on performance. Finally, Talk 3 reviewed parallelization strategies for the algorithm: proper data structures and data layout, parallel direct and iterative solvers, and parallel preconditioners to achieve load balancing and minimize communication between processors. Even though most operations are memory-bandwidth bound, it showed how to get excellent parallel efficiency up to several thousand cores.
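To make the partitioning idea from Talk 2 concrete, here is a minimal, hypothetical sketch (not the presenters’ actual code, and simplified to a 1-D chain of cells rather than a real reservoir grid). The intuition behind transmissibility-weighted partitioning is to cut the grid where inter-cell coupling is weakest, so that strongly coupled cells stay on the same processor and the iterative solver’s iteration count varies less from one partitioning to the next. The function name and data layout are illustrative assumptions.

```python
# Hypothetical illustration of transmissibility-weighted partitioning,
# reduced to a 1-D chain of grid cells for clarity.

def partition_1d(transmissibilities, nparts):
    """Split a 1-D chain of cells into `nparts` contiguous blocks by
    cutting at the (nparts - 1) weakest inter-cell transmissibilities.

    `transmissibilities[i]` couples cell i and cell i + 1.
    Returns a list of (start, end) cell-index ranges, end exclusive.
    """
    ncells = len(transmissibilities) + 1
    # Choose cut points at the smallest coupling values, so strongly
    # coupled cells end up in the same partition.
    cuts = sorted(range(len(transmissibilities)),
                  key=lambda i: transmissibilities[i])[:nparts - 1]
    bounds = sorted(c + 1 for c in cuts)
    edges = [0] + bounds + [ncells]
    return list(zip(edges[:-1], edges[1:]))

# A toy 8-cell chain: the weak couplings (0.1) mark natural break lines.
trans = [5.0, 4.0, 0.1, 6.0, 5.5, 0.1, 4.5]
print(partition_1d(trans, 3))  # → [(0, 3), (3, 6), (6, 8)]
```

Real simulators apply this idea on general unstructured graphs with multilevel partitioners (the METIS family is the usual tool), but the objective is the same: minimize the total transmissibility of the cut edges subject to balanced partition sizes.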
What these three presentations mean for HPC use in oil and gas
Each of these presentations is noteworthy on its own, but the overarching message is even more important — that the oil and gas sector has experienced meaningful innovation in HPC use, with major problems being solved across model capabilities, algorithms, and system balance. The end results are clear use cases illustrating the high impact of supercomputing systems across the entire oil and gas sector.
Geert Wenes, Segment Marketing Manager