
Case Study

High Performance Reservoir Simulation

Billion Cell Simulation


ECHELON enables the modeling and simulation of giant conventional reservoirs

1 billion cells · 1.5 hours · 30 IBM Power nodes

“ECHELON is one of the most disruptive technologies I’ve seen in my career doing simulation. It has proven ability to rapidly run very large, multi-million cell, full-physics models using massive parallelism. For iReservoir, this has led to improved understanding of complex systems by allowing for broad-ranging sensitivity analysis in vastly reduced time frames.”

Dr. Jim Gilman, iReservoir Inc.


Case Study: Billion Cell Simulation
Challenge
A billion cells is several orders of magnitude larger than the models commonly used in practice; typical reservoir models in the industry range from a few hundred thousand to a few million cells. At this “hero scale,” the main goal is to stress-test simulators and demonstrate capability. If we think of a typical model as a standard-definition HDTV, the billion-cell example would have 250 times the resolution of a 4K television. It offers enormous resolution and clarity, but comes at high computational cost. The purpose of our billion-cell calculation was to highlight ECHELON’s capabilities and the efficiencies that GPU computing offers. With help from colleagues at iReservoir, we created a model using publicly available log data collected from a large Middle Eastern carbonate field. We built a three-phase model with 1.01 billion cells and 1,056 wells.

Results
We simulated this model in 92 minutes on 30 IBM POWER8 nodes, each with 4 NVIDIA Tesla P100 GPUs (Figure 1). In contrast, previous billion-cell calculations used over 500 nodes and took 20 hours. The calculation and the results powerfully illustrate i) the capability of GPUs for large-scale physical modeling, ii) the performance advantages of GPUs over CPUs, and iii) the efficiency and density of solution that GPUs offer.

Figure 1. IBM Power NVLINK server with 4 NVIDIA Tesla P100 GPUs used for running ECHELON.
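A rough back-of-the-envelope comparison (our own arithmetic from the figures above, not numbers reported separately in the study) puts the resource savings in node-hours:

    ECHELON:   30 nodes × 92 min  ≈ 46 node-hours
    CPU runs:  500 nodes × 20 h   = 10,000 node-hours (at least, since "over 500 nodes")
    Ratio:     10,000 / 46 ≈ 217

That is roughly 200× fewer node-hours, delivered at about 13× faster wall-clock time (1,200 min / 92 min).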

Benefits
ECHELON is a massively parallel, fully implicit, extended black-oil reservoir simulator built from inception to take full advantage of the fine-grained parallelism and massive compute capability offered by modern Graphics Processing Units (GPUs). These GPUs provide a dense computing platform with ultra-high memory bandwidth and extreme arithmetic throughput. Massively parallel GPU hardware, modern solver algorithms, and careful implementation combine in ECHELON to enable efficient simulation from hundreds of thousands to billions of cells. This is accomplished at speeds that make it practical to simulate hundreds of realizations of large, complex models in vastly less time, all while using far fewer hardware resources than CPU-based solutions. The principal conclusion we draw from our results is that ECHELON used an order of magnitude fewer server nodes and two orders of magnitude fewer domains to achieve an order of magnitude greater calculation speed than those reported by analogous CPU-based codes.
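To make the "one thread per cell" idea concrete, the sketch below shows a minimal CUDA kernel performing a cell-parallel Jacobi-style smoothing sweep on a toy 1-D grid. It is purely illustrative: ECHELON's actual kernels, data layout, and solvers are proprietary, and every name here (jacobiStep and so on) is our own invention, not ECHELON code.

    #include <cstdio>
    #include <utility>
    #include <cuda_runtime.h>

    // One thread per grid cell: the fine-grained, cell-parallel update
    // pattern that maps naturally onto thousands of GPU threads.
    __global__ void jacobiStep(const double* pIn, double* pOut, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1) {
            // Each interior cell averages its two neighbors.
            pOut[i] = 0.5 * (pIn[i - 1] + pIn[i + 1]);
        }
    }

    int main() {
        const int n = 1 << 20;              // 1M cells: tiny next to 1.01 billion
        const size_t bytes = n * sizeof(double);

        double *pIn = nullptr, *pOut = nullptr;
        cudaMalloc(&pIn, bytes);
        cudaMalloc(&pOut, bytes);
        cudaMemset(pIn, 0, bytes);          // zero field; a real run would load a snapshot
        cudaMemset(pOut, 0, bytes);         // keeps the fixed boundary cells at zero

        const int block = 256;
        const int grid = (n + block - 1) / block;
        for (int iter = 0; iter < 100; ++iter) {
            jacobiStep<<<grid, block>>>(pIn, pOut, n);
            std::swap(pIn, pOut);           // ping-pong buffers between sweeps
        }
        cudaDeviceSynchronize();

        printf("ran 100 cell-parallel sweeps over %d cells\n", n);
        cudaFree(pIn);
        cudaFree(pOut);
        return 0;
    }

In a production simulator the per-cell work is far heavier (full three-phase residuals and Jacobian assembly rather than a two-point average), but the mapping of one lightweight thread to each of a billion cells is what lets GPU hardware of this density replace hundreds of CPU nodes.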

“By running ECHELON on IBM Power Systems, users can achieve faster run-times using a fraction of the hardware. The previous record used more than 700,000 processors in a supercomputer installation that occupies nearly half a football field. Stone Ridge did this calculation on two racks of IBM Power Systems machines that could fit in the space of half a ping-pong table.”
Sumit Gupta, IBM

[email protected]
www.stoneridgetechnology.com
