Iterative Stencil Loops (ISLs) or stencil computations are a class of numerical data processing solutions[1] which update array elements according to some fixed pattern, called a stencil.[2] They are most commonly found in computer simulations, e.g. for computational fluid dynamics in the context of scientific and engineering applications. Other notable examples include solving partial differential equations,[1] the Jacobi kernel, the Gauss–Seidel method,[2] image processing[1] and cellular automata.[3] The regular structure of the arrays sets stencil techniques apart from other modeling methods such as the finite element method. Most finite difference codes which operate on regular grids can be formulated as ISLs.
ISLs perform a sequence of sweeps (called timesteps) through a given array.[2] Generally this is a 2- or 3-dimensional regular grid.[3] The elements of the arrays are often referred to as cells. In each timestep, all array elements are updated:[2] each cell's new value is computed from neighboring array elements in a fixed pattern (the stencil). In most cases boundary values are left unchanged, but in some cases (e.g. LBM codes) those need to be adjusted during the computation as well. Since the stencil is the same for each element, the pattern of data accesses is repeated.[4]
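The sweep/update scheme described above can be sketched in a few lines. This is an illustrative sketch, not any particular library's code: the five-point averaging stencil is just a placeholder transition function, and clarity is favored over performance.

```python
def sweep(grid):
    """One timestep: recompute every interior cell of a 2D grid from the
    previous values of its four neighbors (a 5-point averaging stencil).
    Boundary cells are left unchanged."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # double buffer so reads see old values only
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

def isl(grid, timesteps):
    """An Iterative Stencil Loop: a sequence of sweeps over the array."""
    for _ in range(timesteps):
        grid = sweep(grid)
    return grid
```

The double buffer ensures that each update reads only values from the previous timestep, which is what makes the pattern of data accesses identical for every cell.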
Formally, an ISL may be defined as a 5-tuple $(I, S, S_0, s, T)$ with the following meaning:

- $I = \prod_{i=1}^{k} [0, \ldots, n_i]$ is the index set of the simulation space,
- $S$ is the (arbitrary) set of states a cell may be in,
- $S_0 \colon \mathbb{Z}^k \to S$ defines the system's initial state at time 0,
- $s \in \prod_{i=1}^{l} \mathbb{Z}^k$ is the stencil itself, an ordered tuple of $l$ relative coordinates describing the shape of the neighborhood, and
- $T \colon S^l \to S$ is the transition function, which computes a cell's new state from the states of its $l$ neighbors.
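For concreteness, the five components of the tuple can be transcribed directly into code. This is an ad hoc illustration for a tiny 1D case ($k = 1$, $l = 2$), not any library's interface:

```python
# The 5-tuple (I, S, S0, s, T) for a 1D example with n_1 = 4.
k = 1
I = [(i,) for i in range(5)]     # index set [0, ..., 4] as 1-tuples

# S is the set of states; here each cell holds a float.
def S0(c):                       # initial state, defined on all of Z^k
    return 1.0 if c[0] < 0 else 0.0

s = ((-1,), (1,))                # stencil: left and right neighbor (l = 2)

def T(neighbors):                # transition function T: S^l -> S
    return sum(neighbors) / len(neighbors)
```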
Since $I$ is a $k$-dimensional integer interval, the array will always have the topology of a finite regular grid. The array is also called the simulation space and individual cells are identified by their index $c \in I$. The stencil is an ordered tuple of $l$ relative coordinates, so for every cell $c$ we can obtain the tuple of its neighbors' indices $I_c$:

$I_c = \{ j \mid \exists x \in s : j = c + x \}$
Their states are given by mapping the tuple $I_c$ to the corresponding tuple of states $N_i(c)$, where $N_i \colon I \to S^l$ is defined as follows:

$N_i(c) = (s_1, \ldots, s_l) \quad \text{with} \quad s_j = S_i(I_c(j))$
This is all we need to define the system's state for the following time steps $S_{i+1} \colon \mathbb{Z}^k \to S$ with $i \in \mathbb{N}$:

$S_{i+1}(c) = \begin{cases} T(N_i(c)), & c \in I \\ S_i(c), & c \in \mathbb{Z}^k \setminus I \end{cases}$
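This case distinction translates directly into code: cells inside the index set are updated through the transition function applied to their neighbors' previous states, while everything else keeps its value. The sketch below (an illustration, not library code) represents the state as a dictionary covering the index set plus the required boundary cells:

```python
def step(state, I, s, T):
    """Compute S_{i+1} from S_i. `state` maps index tuples to cell states
    and must cover I plus all cells reachable through the stencil s."""
    def N(c):  # tuple of neighbor states, ordered as in the stencil
        return tuple(state[tuple(ci + xi for ci, xi in zip(c, x))] for x in s)
    # Cells in I are updated via T; all other cells are left unchanged.
    return {c: T(N(c)) if c in I else state[c] for c in state}
```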
Note that $S_i$ is defined on $\mathbb{Z}^k$ and not just on $I$, since the boundary conditions need to be set, too. Sometimes the elements of $I_c$ may instead be defined by a vector sum modulo the simulation space's extents:

$I_c = \{ j \mid \exists x \in s : j = ((c + x) \bmod (n_1, \ldots, n_k)) \}$
This may be useful for implementing periodic boundary conditions, which simplifies certain physical models.
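In code, such periodic wrapping amounts to taking each component of the vector sum modulo the corresponding grid extent; a minimal helper (illustrative, not from any library) might look like this:

```python
def wrapped_neighbors(c, s, n):
    """Neighbor indices of cell c under periodic boundary conditions:
    each component of c + x is wrapped modulo the grid extents n."""
    return [tuple((ci + xi) % ni for ci, xi, ni in zip(c, x, n))
            for x in s]
```

On a 4×4 grid, for example, the left neighbor of a cell in column 0 wraps around to column 3, giving the grid a toroidal topology.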
To illustrate the formal definition, consider how a two-dimensional Jacobi iteration can be defined. The update function computes the arithmetic mean of a cell's four neighbors. In this case we start with an initial solution of 0. The left and right boundaries are fixed at 1, while the upper and lower boundaries are set to 0. After a sufficient number of iterations, the system converges to a saddle shape.
\begin{align}
I &= [0, \ldots, 99]^2 \\
S &= \mathbb{R} \\
S_0 &\colon \mathbb{Z}^2 \to \mathbb{R} \\
S_0((x, y)) &= \begin{cases} 1, & x < 0 \\ 0, & 0 \le x < 100 \\ 1, & x \ge 100 \end{cases} \\
s &= ((0, -1), (-1, 0), (1, 0), (0, 1)) \\
T &\colon \mathbb{R}^4 \to \mathbb{R} \\
T((x_1, x_2, x_3, x_4)) &= 0.25 \cdot (x_1 + x_2 + x_3 + x_4)
\end{align}
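A direct implementation of this example can make the convergence visible; this sketch uses a configurable grid size rather than the fixed 100×100 and stores the fixed boundary values in a one-cell halo:

```python
def jacobi(n, timesteps):
    """2D Jacobi iteration on an n x n grid with a one-cell halo:
    left/right halo columns fixed at 1, top/bottom halo rows at 0,
    interior initialized to 0 (indexing is grid[x][y])."""
    grid = [[0.0] * (n + 2) for _ in range(n + 2)]
    for y in range(n + 2):
        grid[0][y] = grid[n + 1][y] = 1.0  # the x < 0 and x >= n boundaries
    for _ in range(timesteps):
        new = [row[:] for row in grid]
        for x in range(1, n + 1):
            for y in range(1, n + 1):
                new[x][y] = 0.25 * (grid[x][y - 1] + grid[x - 1][y] +
                                    grid[x + 1][y] + grid[x][y + 1])
        grid = new
    return grid
```

After enough timesteps the values are high near the left and right boundaries and low near the top and bottom, i.e. the saddle shape mentioned above.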
The shape of the neighborhood used during the updates depends on the application itself. The most common stencils are the 2D or 3D versions of the von Neumann neighborhood and Moore neighborhood. The example above uses a 2D von Neumann stencil while LBM codes generally use its 3D variant. Conway's Game of Life uses the 2D Moore neighborhood. That said, other stencils such as a 25-point stencil for seismic wave propagation[5] can be found, too.
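The two standard neighborhoods are easy to generate as offset sets; these helpers are illustrative, with range-1 defaults matching the stencils named above:

```python
def von_neumann_2d(r=1):
    """Offsets of the 2D von Neumann neighborhood of range r
    (cells within Manhattan distance r, excluding the center)."""
    return [(dx, dy) for dx in range(-r, r + 1) for dy in range(-r, r + 1)
            if 0 < abs(dx) + abs(dy) <= r]

def moore_2d(r=1):
    """Offsets of the 2D Moore neighborhood of range r
    (cells within Chebyshev distance r, excluding the center)."""
    return [(dx, dy) for dx in range(-r, r + 1) for dy in range(-r, r + 1)
            if (dx, dy) != (0, 0)]
```

For r = 1 these yield the 4 offsets used by the Jacobi example above and the 8 offsets used by Conway's Game of Life, respectively.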
Many simulation codes may be formulated naturally as ISLs. Since computing time and memory consumption grow linearly with the number of array elements, parallel implementations of ISLs are of paramount importance to research.[6] This is challenging since the computations are tightly coupled (because of the cell updates depending on neighboring cells) and most ISLs are memory bound (i.e. the ratio of memory accesses to calculations is high).[7] Virtually all current parallel architectures have been explored for executing ISLs efficiently;[8] at the moment GPGPUs have proven to be most efficient.[9]
Due to both the importance of ISLs to computer simulations and their high computational requirements, there are a number of efforts which aim at creating reusable libraries to support scientists in performing stencil-based computations. The libraries are mostly concerned with the parallelization, but may also tackle other challenges, such as IO, steering and checkpointing. They may be classified by their API.
This is a traditional design: the library manages a set of n-dimensional scalar arrays which the user program accesses to perform updates. The library handles the synchronization of the boundaries (dubbed ghost zone or halo). The advantage of this interface is that the user program may loop over the arrays, which makes it easy to integrate legacy code.[10] The disadvantage is that the library cannot handle cache blocking (as this has to be done within the loops[11]) or wrap the API calls for accelerators (e.g. via CUDA or OpenCL). Implementations include Cactus, a physics problem solving environment, and waLBerla.
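The shape of such a grid-exposing interface can be sketched as follows; the class and method names are hypothetical and are not taken from Cactus or waLBerla:

```python
class Grid:
    """Hypothetical library-managed 2D array with a one-cell ghost zone."""
    def __init__(self, n, halo_value=0.0):
        self.n = n
        self.data = [[halo_value] * (n + 2) for _ in range(n + 2)]

    def sync_halo(self):
        # In a distributed run the library would exchange ghost zones
        # with neighboring processes here; serially it is a no-op.
        pass

def user_sweep(grid):
    """User code: plain loops over the exposed array. Easy to port legacy
    code to, but the library cannot reorder or block these loops."""
    g, n = grid.data, grid.n
    new = [row[:] for row in g]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            new[i][j] = 0.25 * (g[i - 1][j] + g[i + 1][j] +
                                g[i][j - 1] + g[i][j + 1])
    grid.data = new
```

The key point is the division of labor: the library owns storage and halo synchronization, while the traversal order lives entirely in user code.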
These libraries move the interface to updating single simulation cells: only the current cell and its neighbors are exposed, e.g. via getter/setter methods. The advantage of this approach is that the library can tightly control which cells are updated in which order, which is useful not only to implement cache blocking, but also to run the same code on multi-cores and GPUs.[12] This approach requires the user to recompile the source code together with the library, since otherwise a function call would be required for every cell update, which would seriously impair performance. This is only feasible with techniques such as class templates or metaprogramming, which is also why this design is found only in newer libraries. Examples are Physis and LibGeoDecomp.
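In C++ such an interface would use class templates so the per-cell accessor calls compile away; the Python sketch below shows only the shape of the interface, and its names are hypothetical rather than Physis' or LibGeoDecomp's API:

```python
class Neighborhood:
    """Read-only view of a cell's surroundings: the only thing the
    user's update code gets to see."""
    def __init__(self, grid, i, j):
        self._grid, self._i, self._j = grid, i, j

    def get(self, di, dj):  # state of the neighbor at a relative offset
        return self._grid[self._i + di][self._j + dj]

def library_update(grid, user_update):
    """The library owns the traversal: it decides which cells are updated
    in which order (and could block for cache or offload to a GPU here)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = user_update(Neighborhood(grid, i, j))
    return new

def average_cell(hood):
    """User-supplied transition function: mean of the von Neumann neighbors."""
    return 0.25 * (hood.get(-1, 0) + hood.get(1, 0) +
                   hood.get(0, -1) + hood.get(0, 1))
```

Because user code never sees the loop, the same `average_cell` could be driven by a blocked, threaded, or GPU traversal without modification, which is exactly the flexibility this API style buys.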