Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spread data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables. An example of a situation where the Barnes scheme is important is in weather forecasting[1] [2] where measurements are made wherever monitoring stations may be located, the positions of which are constrained by topography. Such interpolation is essential in data visualisation, e.g. in the construction of contour plots or other representations of analytic surfaces.
Barnes proposed an objective scheme for the interpolation of two-dimensional data using a multi-pass approach.[3] [4] This provided a method for interpolating sea-level pressures across the entire United States of America, producing a synoptic chart across the country from dispersed monitoring stations. Researchers have subsequently improved the Barnes method to reduce the number of parameters required for calculation of the interpolated result, increasing the objectivity of the method.
The method constructs a grid whose size is determined by the distribution of the two-dimensional data points. Using this grid, the function values are calculated at each grid point. To do this, the method utilises a series of Gaussian functions, with a distance weighting used to determine the relative importance of any given measurement in determining the function values. Correction passes are then made to optimise the function values by accounting for the spectral response of the interpolated points.
Here we describe the method of interpolation used in a multi-pass Barnes interpolation.
For a given grid point (i, j), the interpolated function g(x_i, y_j) is first approximated by an inverse-distance weighting of the data points. To do this, a Gaussian weighting value is assigned to each measurement point for each grid point, such that

$$ w_{ij} = \exp\left(-\frac{r_{ij}^{2}}{\kappa}\right), $$
where $r_{ij}$ is the distance from the grid point $(x_i, y_j)$ to the measurement point being weighted, and $\kappa$ is a falloff parameter controlling the width of the Gaussian function, set by the characteristic data spacing $\Delta n$:

$$ \kappa = 5.052\left(\frac{2\Delta n}{\pi}\right)^{2}. $$
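As an illustration, a minimal Python sketch of this weighting might look as follows; the helper names kappa and gaussian_weights and the array layout of the inputs are assumptions made for the example, not part of the original scheme.

```python
import numpy as np

def kappa(delta_n):
    """Falloff parameter kappa = 5.052 * (2 * delta_n / pi)**2."""
    return 5.052 * (2.0 * delta_n / np.pi) ** 2

def gaussian_weights(grid_point, data_points, k):
    """Gaussian weights exp(-r^2 / kappa) of every measurement for one grid point.

    grid_point  : array of shape (2,)   -- the grid location (x_i, y_j)
    data_points : array of shape (n, 2) -- the measurement locations
    k           : the falloff parameter kappa
    """
    r2 = np.sum((data_points - grid_point) ** 2, axis=1)  # squared distances r_ij^2
    return np.exp(-r2 / k)
```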
The initial interpolation of the function from the measured values $f_k(x, y)$ is then given by

$$ g_0(x_i, y_j) = \frac{\sum_k w_{ij} f_k(x, y)}{\sum_k w_{ij}}. $$
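Under the same assumptions, the first pass can be sketched by looping over the grid and forming the weighted average of the measured values; grid_x, grid_y, data_points and values are assumed inputs holding the grid coordinates, the measurement locations and the measured values f_k.

```python
def barnes_first_pass(grid_x, grid_y, data_points, values, delta_n):
    """Initial field g0 on a regular grid from scattered measurements."""
    k = kappa(delta_n)
    g0 = np.empty((len(grid_x), len(grid_y)))
    for i, x in enumerate(grid_x):
        for j, y in enumerate(grid_y):
            w = gaussian_weights(np.array([x, y]), data_points, k)
            g0[i, j] = np.sum(w * values) / np.sum(w)  # weighted average of the measurements
    return g0
```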
The correction for the next pass then utilises the difference between the observed field and the interpolated values at the measurement points to optimise the result:[1]
$$ g_1(x_i, y_j) = g_0(x_i, y_j) + \sum_k \left( f_k(x, y) - g_0(x, y) \right) \exp\left(-\frac{r_{ij}^{2}}{\gamma\kappa}\right). $$
It is worth noting that successive correction passes can be used to achieve better agreement between the interpolated function and the measured values at the experimental points.
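A single correction pass could be sketched as below, again under the assumptions above; g_prev_at_data is assumed to hold the previous-pass field evaluated at the measurement locations (for example by bilinear interpolation from the grid), a detail the text above does not specify.

```python
def barnes_correction_pass(g_prev, g_prev_at_data, grid_x, grid_y,
                           data_points, values, delta_n, gamma):
    """One correction pass: add Gaussian-weighted residuals to the gridded field."""
    gk = gamma * kappa(delta_n)          # narrowed falloff parameter gamma * kappa
    residuals = values - g_prev_at_data  # f_k - g0 at the measurement points
    g_new = g_prev.copy()
    for i, x in enumerate(grid_x):
        for j, y in enumerate(grid_y):
            r2 = np.sum((data_points - np.array([x, y])) ** 2, axis=1)
            w = np.exp(-r2 / gk)
            # Correction term as written in the equation above; some
            # implementations also normalise this sum by sum(w).
            g_new[i, j] += np.sum(w * residuals)
    return g_new
```

Recomputing the residuals from the updated field and repeating this step yields the successive correction passes.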
Although described as an objective method, there are many parameters which control the interpolated field. The choice of the data spacing Δn, the grid spacing Δx and the smoothing parameter γ all influence the final result.
The data spacing used in the analysis, Δn, may be chosen either by calculating the true experimental inter-point spacing of the data, or by the use of a complete spatial randomness assumption, depending upon the degree of clustering in the observed data. The smoothing parameter γ scales the falloff parameter κ during the correction passes, so that smaller values of γ narrow the Gaussian weighting and sharpen the response of the interpolation to the observations.
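The data spacing itself can be estimated from the measurements; as an example of computing Δn from the true inter-point spacing, the sketch below takes the mean nearest-neighbour distance as the estimate, which is one of several reasonable choices.

```python
def estimate_delta_n(data_points):
    """Estimate the data spacing as the mean nearest-neighbour distance."""
    d = np.sqrt(((data_points[:, None, :] - data_points[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)   # exclude each point's zero distance to itself
    return d.min(axis=1).mean()
```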