Iterative Stencil Loops Explained

Iterative Stencil Loops (ISLs), or stencil computations, are a class of numerical data processing solutions[1] which update array elements according to some fixed pattern, called a stencil.[2] They are most commonly found in computer simulations, e.g. for computational fluid dynamics in the context of scientific and engineering applications. Other notable examples include solving partial differential equations,[1] the Jacobi kernel, the Gauss–Seidel method,[2] image processing[1] and cellular automata.[3] The regular structure of the arrays sets stencil techniques apart from other modeling methods such as the finite element method. Most finite difference codes which operate on regular grids can be formulated as ISLs.

Definition

ISLs perform a sequence of sweeps (called timesteps) through a given array.[2] Generally this is a 2- or 3-dimensional regular grid.[3] The elements of the arrays are often referred to as cells. In each timestep, all array elements are updated.[2] Using neighboring array elements in a fixed pattern (the stencil), each cell's new value is computed. In most cases boundary values are left unchanged, but in some cases (e.g. LBM codes) those need to be adjusted during the computation as well. Since the stencil is the same for each element, the pattern of data accesses is repeated.[4]

Formally, an ISL can be described as a 5-tuple $(I, S, S_0, s, T)$ with the following meaning:[3]

$I = \prod_{i=1}^{k} [0, \ldots, n_i]$ is the index set. It defines the topology of the array.

$S$ is the (not necessarily finite) set of states, one of which each cell may take on at any given timestep.

$S_0\colon \mathbb{Z}^k \to S$ defines the initial state of the system at time 0.

$s \in \prod_{i=1}^{l} \mathbb{Z}^k$ is the stencil itself and describes the actual shape of the neighborhood. There are $l$ elements in the stencil.

$T\colon S^l \to S$ is the transition function, which is used to determine a cell's new state depending on its neighbors.

Since $I$ is a $k$-dimensional integer interval, the array will always have the topology of a finite regular grid. The array is also called the simulation space and individual cells are identified by their index $c \in I$. The stencil is an ordered set of $l$ relative coordinates. We can now obtain for each cell $c$ the tuple of its neighbors' indices $I_c$:

$$I_c = \{j \mid \exists x \in s : j = c + x\}$$

Their states are given by mapping the tuple $I_c$ to the corresponding tuple of states $N_i(c)$, where $N_i\colon I \to S^l$ is defined as follows:

$$N_i(c) = (s_1, \ldots, s_l) \quad \text{with} \quad s_j = S_i(I_c(j))$$

This is all we need to define the system's state for the following time steps $S_{i+1}\colon \mathbb{Z}^k \to S$ with $i \in \mathbb{N}$:

$$S_{i+1}(c) = \begin{cases} T(N_i(c)), & c \in I \\ S_i(c), & c \in \mathbb{Z}^k \setminus I \end{cases}$$
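The case distinction above translates almost directly into code. The following Python sketch (an illustration whose names mirror the tuple $(I, S, S_0, s, T)$, not code from any particular library) computes $S_{i+1}$ from $S_i$, using a dictionary so that boundary cells outside $I$ can be stored alongside the interior:

```python
# Minimal sketch of one ISL timestep, following the formal definition above.

def isl_step(state, stencil, transition, index_set):
    """Compute S_{i+1} from S_i.

    state      -- dict mapping index tuples in Z^k to values (S_i); must
                  also contain the boundary cells referenced by the stencil
    stencil    -- ordered tuple of relative coordinates (s)
    transition -- function T applied to the tuple of neighbor states
    index_set  -- iterable of index tuples forming I
    """
    new_state = dict(state)  # cells outside I keep their old value
    for c in index_set:
        # Gather the neighbor state tuple N_i(c) in stencil order.
        neighbors = tuple(
            state[tuple(ci + xi for ci, xi in zip(c, x))] for x in stencil
        )
        new_state[c] = transition(neighbors)
    return new_state
```

Note that the new values are written to a separate copy of the state, so every update reads only values from timestep $i$, exactly as the definition requires.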

Note that $S_i$ is defined on $\mathbb{Z}^k$ and not just on $I$, since the boundary conditions need to be set, too. Sometimes the elements of $I_c$ may be defined by a vector addition modulo the simulation space's dimensions to realize toroidal topologies:

$$I_c = \{j \mid \exists x \in s : j = ((c + x) \bmod (n_1, \ldots, n_k))\}$$

This may be useful for implementing periodic boundary conditions, which simplifies certain physical models.
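Assuming a grid with extents $(n_1, \ldots, n_k)$, the wrapped neighbor lookup can be sketched in a few lines (function name illustrative):

```python
# Neighbor indices I_c with toroidal (periodic) topology: each component
# wraps around modulo the corresponding grid extent, as in the modified
# definition of I_c above.

def wrapped_neighbors(c, stencil, extents):
    """Return the tuple of neighbor indices of cell c with wrap-around."""
    return tuple(
        tuple((ci + xi) % n for ci, xi, n in zip(c, x, extents))
        for x in stencil
    )
```

For example, on a 100 x 100 torus the left neighbor of cell (0, 0) is (0, 99) rather than an out-of-bounds index.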

Example: 2D Jacobi iteration

To illustrate the formal definition, we'll have a look at how a two-dimensional Jacobi iteration can be defined. The update function computes the arithmetic mean of a cell's four neighbors. In this case we start with an initial solution of 0. The left and right boundaries are fixed at 1, while the upper and lower boundaries are set to 0. After a sufficient number of iterations, the system converges to a saddle shape.

\begin{align}
I &= [0,\ldots,99]^2 \\
S &= \mathbb{R} \\
S_0 &\colon \mathbb{Z}^2 \to \mathbb{R} \\
S_0((x,y)) &= \begin{cases} 1, & x < 0 \\ 0, & 0 \le x < 100 \\ 1, & x \ge 100 \end{cases} \\
s &= ((0,-1),(-1,0),(1,0),(0,1)) \\
T &\colon \mathbb{R}^4 \to \mathbb{R} \\
T((x_1,x_2,x_3,x_4)) &= 0.25 \cdot (x_1+x_2+x_3+x_4)
\end{align}
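This iteration can be sketched in a few lines of NumPy. Instead of defining the state on all of $\mathbb{Z}^2$, the sketch stores one ghost layer around the interior to hold the fixed boundary values; the function name, grid size parameter and timestep count are illustrative:

```python
import numpy as np

def jacobi_2d(n=100, timesteps=200):
    """2D Jacobi iteration on an n x n interior with fixed boundaries."""
    grid = np.zeros((n + 2, n + 2))   # one ghost layer around the interior
    grid[:, 0] = grid[:, -1] = 1.0    # left/right boundaries fixed at 1
    grid[0, :] = grid[-1, :] = 0.0    # upper/lower boundaries fixed at 0
    for _ in range(timesteps):
        # The right-hand side is evaluated in full before assignment,
        # so all reads see the previous timestep (a true Jacobi sweep).
        grid[1:-1, 1:-1] = 0.25 * (
            grid[:-2, 1:-1] + grid[2:, 1:-1]    # upper and lower neighbors
            + grid[1:-1, :-2] + grid[1:-1, 2:]  # left and right neighbors
        )
    return grid
```

By symmetry the converged solution takes the value 0.5 at the center of the grid, with the saddle shape rising toward the left and right boundaries.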

Stencils

The shape of the neighborhood used during the updates depends on the application itself. The most common stencils are the 2D or 3D versions of the von Neumann neighborhood and Moore neighborhood. The example above uses a 2D von Neumann stencil while LBM codes generally use its 3D variant. Conway's Game of Life uses the 2D Moore neighborhood. That said, other stencils such as a 25-point stencil for seismic wave propagation[5] can be found, too.
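For concreteness, the offsets of these two common neighborhoods can be generated as follows (a small illustration, not tied to any particular library):

```python
def von_neumann_2d():
    """4-point von Neumann neighborhood: orthogonal neighbors only."""
    return [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if abs(dx) + abs(dy) == 1]

def moore_2d():
    """8-point Moore neighborhood: all surrounding cells,
    as used by Conway's Game of Life."""
    return [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]
```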

Implementation issues

Many simulation codes can be formulated naturally as ISLs. Since computing time and memory consumption grow linearly with the number of array elements, parallel implementations of ISLs are of paramount importance to research.[6] This is challenging, since the computations are tightly coupled (each cell's update depends on its neighboring cells) and most ISLs are memory bound (i.e. the ratio of memory accesses to calculations is high).[7] Virtually all current parallel architectures have been explored for executing ISLs efficiently;[8] at the moment GPGPUs have proven to be the most efficient.[9]

Libraries

Due to both the importance of ISLs to computer simulations and their high computational requirements, there are a number of efforts which aim at creating reusable libraries to support scientists in performing stencil-based computations. The libraries are mostly concerned with the parallelization, but may also tackle other challenges, such as IO, steering and checkpointing. They may be classified by their API.

Patch-based libraries

This is a traditional design. The library manages a set of n-dimensional scalar arrays, which the user program may access to perform updates. The library handles the synchronization of the boundaries (dubbed ghost zone or halo). The advantage of this interface is that the user program may loop over the arrays, which makes it easy to integrate legacy code.[10] The disadvantage is that the library cannot handle cache blocking (as this has to be done within the loops[11]) or wrapping of the API calls for accelerators (e.g. via CUDA or OpenCL). Implementations include Cactus, a physics problem solving environment, and waLBerla.
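In this style the user kernel is simply a loop nest over an array whose outer layer is the ghost zone. A hypothetical sketch of such a kernel (the halo itself would be filled by the library between sweeps, which is omitted here):

```python
import numpy as np

def user_update(patch):
    """User-written kernel in the patch-based style: plain loops over the
    interior of a patch; index 0 and -1 in each dimension are the halo."""
    new = patch.copy()
    for y in range(1, patch.shape[0] - 1):
        for x in range(1, patch.shape[1] - 1):
            new[y, x] = 0.25 * (patch[y - 1, x] + patch[y + 1, x]
                                + patch[y, x - 1] + patch[y, x + 1])
    return new
```

Because the loops live in user code, the library has no opportunity to reorder them, which is exactly why cache blocking cannot be applied from the outside.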

Cell-based libraries

These libraries move the interface to updating single simulation cells: only the current cell and its neighbors are exposed, e.g. via getter/setter methods. The advantage of this approach is that the library can tightly control which cells are updated in which order, which is useful not only for implementing cache blocking, but also for running the same code on multi-cores and GPUs.[12] This approach requires the user to recompile the source code together with the library, since otherwise a function call for every cell update would be required, which would seriously impair performance. This is only feasible with techniques such as class templates or metaprogramming, which is also the reason why this design is only found in newer libraries. Examples are Physis and LibGeoDecomp.
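A cell-based interface might look as follows; all class and function names here are purely illustrative and are not taken from Physis or LibGeoDecomp:

```python
class Neighborhood:
    """Read-only window the library passes to the per-cell update."""
    def __init__(self, grid, y, x):
        self._grid, self._y, self._x = grid, y, x

    def get(self, dy, dx):
        return self._grid[self._y + dy][self._x + dx]

def update_cell(hood):
    """User-supplied kernel: sees only the cell's neighborhood."""
    return 0.25 * (hood.get(-1, 0) + hood.get(1, 0)
                   + hood.get(0, -1) + hood.get(0, 1))

def library_sweep(grid, kernel):
    """The library's traversal: it alone decides the visiting order, so it
    could block the iteration for caches without touching user code."""
    h, w = len(grid), len(grid[0])
    return [[kernel(Neighborhood(grid, y, x))
             if 0 < y < h - 1 and 0 < x < w - 1 else grid[y][x]
             for x in range(w)] for y in range(h)]
```

In a compiled language the per-cell call would be inlined via templates or metaprogramming, as noted above; in this Python sketch it remains an ordinary function call for clarity.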

Notes and References

  1. Roth, Gerald et al. (1997) Proceedings of SC'97: High Performance Networking and Computing. Compiling Stencils in High Performance Fortran.
  2. Sloot, Peter M.A. et al. (May 28, 2002) Computational Science – ICCS 2002: International Conference, Amsterdam, the Netherlands, April 21–24, 2002. Proceedings, Part I. Page 843. Publisher: Springer.
  3. Fey, Dietmar et al. (2010) Grid-Computing: Eine Basistechnologie für Computational Science. Page 439. Publisher: Springer.
  4. Yang, Laurence T.; Guo, Minyi. (August 12, 2005) High-Performance Computing : Paradigm and Infrastructure. Page 221. Publisher: Wiley-Interscience.
  5. Micikevicius, Paulius et al. (2009) 3D finite difference computation on GPUs using CUDA Proceedings of 2nd Workshop on General Purpose Processing on Graphics Processing Units
  6. Datta, Kaushik (2009) Auto-tuning Stencil Codes for Cache-Based Multicore Platforms , Ph.D. Thesis
  7. Wellein, G et al. (2009) Efficient temporal blocking for stencil computations by multicore-aware wavefront parallelization, 33rd Annual IEEE International Computer Software and Applications Conference, COMPSAC 2009
  8. Datta, Kaushik et al. (2008) Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures, SC '08 Proceedings of the 2008 ACM/IEEE conference on Supercomputing
  9. Schäfer, Andreas and Fey, Dietmar (2011) High Performance Stencil Code Algorithms for GPGPUs, Proceedings of the International Conference on Computational Science, ICCS 2011
  10. S. Donath, J. Götz, C. Feichtinger, K. Iglberger and U. Rüde (2010) waLBerla: Optimization for Itanium-based Systems with Thousands of Processors, High Performance Computing in Science and Engineering, Garching/Munich 2009
  11. Nguyen, Anthony et al. (2010) 3.5-D Blocking Optimization for Stencil Computations on Modern CPUs and GPUs, SC '10 Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis
  12. Naoya Maruyama, Tatsuo Nomura, Kento Sato, and Satoshi Matsuoka (2011) Physis: An Implicitly Parallel Programming Model for Stencil Computations on Large-Scale GPU-Accelerated Supercomputers, SC '11 Proceedings of the 2011 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis