In numerical analysis, a multigrid method (MG method) is an algorithm for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid.[1] MG methods can be used as solvers as well as preconditioners.
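To see why relaxation alone is insufficient, consider the following illustrative sketch (assuming Python with NumPy; the grid size, mode numbers and damping factor are arbitrary choices made for this example). It applies one damped Jacobi sweep to the discretized 1D Poisson equation and compares how strongly a long-wavelength and a short-wavelength error mode are reduced:

import numpy as np

n = 63                        # interior grid points, h = 1/(n+1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
omega = 2.0 / 3.0             # damping factor of weighted Jacobi

def jacobi_sweep(u, f):
    # One weighted Jacobi sweep for -u'' = f with homogeneous Dirichlet boundaries
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    r = f - Au / h**2                       # residual
    return u + omega * (h**2 / 2.0) * r

# The right-hand side is zero, so the exact solution is zero and the iterate is the error.
for k in (1, 48):                           # long-wavelength (k=1) vs short-wavelength (k=48) mode
    e = np.sin(k * np.pi * x)
    e_new = jacobi_sweep(e, np.zeros(n))
    print(k, np.max(np.abs(e_new)) / np.max(np.abs(e)))

The short-wavelength mode is damped by a large factor in a single sweep, while the long-wavelength mode is left almost unchanged; multigrid exploits this by handing the remaining long-wavelength error to coarser grids, where it appears short-wavelength relative to the coarser grid spacing.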
The main idea of multigrid is to accelerate the convergence of a basic iterative method (known as relaxation, which generally reduces short-wavelength error) by a global correction of the fine grid solution approximation from time to time, accomplished by solving a coarse problem. The coarse problem, while cheaper to solve, is similar to the fine grid problem in that it also has short- and long-wavelength errors. It can also be solved by a combination of relaxation and appeal to still coarser grids. This recursive process is repeated until a grid is reached where the cost of direct solution there is negligible compared to the cost of one relaxation sweep on the fine grid. This multigrid cycle typically reduces all error components by a fixed amount bounded well below one, independent of the fine grid mesh size. The typical application for multigrid is in the numerical solution of elliptic partial differential equations in two or more dimensions.[2]
Multigrid methods can be applied in combination with any of the common discretization techniques. For example, the finite element method may be recast as a multigrid method.[3] In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions and boundary conditions. They do not depend on the separability of the equations or other special properties of the equation. They have also been widely used for more-complicated non-symmetric and nonlinear systems of equations, like the Lamé equations of elasticity or the Navier-Stokes equations.[4]
There are many variations of multigrid algorithms, but the common feature is that a hierarchy of discretizations (grids) is considered. The important steps are:[5] [6]
- Smoothing – reducing high-frequency errors, for example using a few iterations of the Gauss–Seidel method.
- Residual computation – computing the residual error after the smoothing operation(s).
- Restriction – downsampling the residual error to a coarser grid.
- Interpolation or prolongation – interpolating a correction computed on a coarser grid into a finer grid.
- Correction – adding the prolongated coarser-grid solution onto the finer grid.
There are many choices of multigrid methods with varying trade-offs between the cost of a single iteration and the rate of convergence per iteration. The three main types are the V-Cycle, F-Cycle, and W-Cycle. They differ in which and how many coarse-grid cycles are performed per fine iteration: the V-Cycle algorithm executes one coarse-grid V-Cycle, each F-Cycle performs a coarse-grid F-Cycle followed by a coarse-grid V-Cycle (see the pseudocode below), and each W-Cycle performs two coarse-grid W-Cycles per iteration. For a discrete 2D problem, an F-Cycle iteration takes 83% more time to compute than a V-Cycle iteration, while a W-Cycle iteration takes 125% more. If the problem is set up in a 3D domain, then F-Cycle and W-Cycle iterations take about 64% and 75% more time, respectively, than a V-Cycle iteration, ignoring overheads. Typically, the W-Cycle produces convergence similar to the F-Cycle. However, for convection-diffusion problems with high Péclet numbers, the W-Cycle can show a superior rate of convergence per iteration over the F-Cycle. The choice of smoothing operators is extremely diverse; for example, they include Krylov subspace methods and can be preconditioned.
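The structural difference between the cycle types can be illustrated by counting how many times each grid level is entered during one fine-grid iteration. The following short Python sketch (the number of levels is an arbitrary choice; smoothing and grid-transfer costs are ignored) mirrors the recursion patterns just described:

def count_visits(cycle, level, nlevels, visits):
    # Record one entry into this level, then recurse as the cycle type dictates.
    visits[level] += 1
    if level == nlevels - 1:      # coarsest level: no further recursion
        return
    if cycle == 'V':
        count_visits('V', level + 1, nlevels, visits)
    elif cycle == 'W':
        count_visits('W', level + 1, nlevels, visits)
        count_visits('W', level + 1, nlevels, visits)
    elif cycle == 'F':
        count_visits('F', level + 1, nlevels, visits)
        count_visits('V', level + 1, nlevels, visits)

nlevels = 5
for cycle in ('V', 'F', 'W'):
    visits = [0] * nlevels
    count_visits(cycle, 0, nlevels, visits)
    print(cycle, visits)

For the V-Cycle every level is entered once, for the F-Cycle level l is entered l+1 times, and for the W-Cycle level l is entered 2^l times, which is why the W-Cycle becomes relatively more expensive as the number of levels grows.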
Any geometric multigrid cycle iteration is performed on a hierarchy of grids and hence can be coded using recursion. Since the function calls itself with arguments for successively coarser grids, the recursion stops at the coarsest grid. In cases where the system has a high condition number, the correction procedure is modified such that only a fraction of the prolongated coarser-grid solution is added onto the finer grid.
These steps can be used as shown in the MATLAB-style pseudocode below for one iteration of V-Cycle multigrid:

function phi = V_Cycle(phi,f,h)
    % Recursive V-Cycle multigrid for one iteration on a grid of spacing h

    % Pre-Smoothing
    phi = smoothing(phi,f,h);

    % Compute Residual Errors
    r = residual(phi,f,h);

    % Restriction
    rhs = restriction(r);

    eps = zeros(size(rhs));

    % stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = coarse_level_solve(eps,rhs,2*h);
    else
        eps = V_Cycle(eps,rhs,2*h);
    end

    % Prolongation and Correction
    phi = phi + prolongation(eps);

    % Post-Smoothing
    phi = smoothing(phi,f,h);
end

The following represents F-Cycle multigrid. This multigrid cycle is slower than the V-Cycle per iteration but does result in faster convergence.

function phi = F_Cycle(phi,f,h)
    % Pre-Smoothing
    phi = smoothing(phi,f,h);

    % Compute Residual Errors
    r = residual(phi,f,h);

    % Restriction
    rhs = restriction(r);

    eps = zeros(size(rhs));

    % stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = coarse_level_solve(eps,rhs,2*h);
    else
        eps = F_Cycle(eps,rhs,2*h);
    end

    % Prolongation and Correction
    phi = phi + prolongation(eps);

    % Re-Smoothing
    phi = smoothing(phi,f,h);

    % Compute Residual Errors
    r = residual(phi,f,h);

    % Restriction
    rhs = restriction(r);

    % stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = coarse_level_solve(eps,rhs,2*h);
    else
        eps = V_Cycle(eps,rhs,2*h);
    end

    % Prolongation and Correction
    phi = phi + prolongation(eps);

    % Post-Smoothing
    phi = smoothing(phi,f,h);
end

Similarly, the procedures can be modified as shown in the MATLAB-style pseudocode below for one iteration of W-Cycle multigrid, for an even superior rate of convergence in certain cases:

function phi = W_cycle(phi,f,h)
    % Pre-Smoothing
    phi = smoothing(phi,f,h);

    % Compute Residual Errors
    r = residual(phi,f,h);

    % Restriction
    rhs = restriction(r);

    eps = zeros(size(rhs));

    % stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = coarse_level_solve(eps,rhs,2*h);
    else
        eps = W_cycle(eps,rhs,2*h);
    end

    % Prolongation and Correction
    phi = phi + prolongation(eps);

    % Re-Smoothing
    phi = smoothing(phi,f,h);

    % Compute Residual Errors
    r = residual(phi,f,h);

    % Restriction
    rhs = restriction(r);

    % stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = coarse_level_solve(eps,rhs,2*h);
    else
        eps = W_cycle(eps,rhs,2*h);
    end

    % Prolongation and Correction
    phi = phi + prolongation(eps);

    % Post-Smoothing
    phi = smoothing(phi,f,h);
end
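The smoothing, residual, restriction, prolongation and coarse_level_solve routines above are problem-specific and left unspecified. As a purely illustrative sketch (not taken from the cited references), the following runnable Python/NumPy code implements one possible set of choices for the one-dimensional Poisson equation -u'' = f with homogeneous Dirichlet boundary conditions: damped Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation, and an exact solve on a one-point coarsest grid. All names and parameter values (grid size, damping factor omega, number of sweeps) are assumptions made for the example.

import numpy as np

def residual(phi, f, h):
    # r = f - A*phi for the stencil (-phi[i-1] + 2*phi[i] - phi[i+1]) / h^2, zero boundaries
    Aphi = 2.0 * phi
    Aphi[1:] -= phi[:-1]
    Aphi[:-1] -= phi[1:]
    return f - Aphi / (h * h)

def smoothing(phi, f, h, sweeps=3, omega=2.0/3.0):
    # Weighted Jacobi: phi <- phi + omega * D^{-1} * r, with diagonal D = 2/h^2
    for _ in range(sweeps):
        phi = phi + omega * (h * h / 2.0) * residual(phi, f, h)
    return phi

def restriction(r):
    # Full weighting; coarse point j coincides with fine point 2*j+1
    return 0.25 * r[:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]

def prolongation(ec):
    # Linear interpolation from the coarse grid back to the fine grid
    ef = np.zeros(2 * ec.size + 1)
    ef[1::2] = ec                              # coarse values carry over
    ef[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])      # midpoints between coarse points
    ef[0] = 0.5 * ec[0]                        # neighbours of the zero boundary
    ef[-1] = 0.5 * ec[-1]
    return ef

def v_cycle(phi, f, h):
    phi = smoothing(phi, f, h)                 # pre-smoothing
    rhs = restriction(residual(phi, f, h))     # restrict the residual
    if rhs.size == 1:                          # coarsest grid: solve 2*e/(2h)^2 = rhs exactly
        eps = rhs * (2.0 * h) ** 2 / 2.0
    else:
        eps = v_cycle(np.zeros_like(rhs), rhs, 2.0 * h)
    phi = phi + prolongation(eps)              # prolongation and correction
    return smoothing(phi, f, h)                # post-smoothing

# Example: solve -u'' = pi^2 sin(pi x), whose exact solution is u = sin(pi x)
n = 2 ** 7 - 1                                 # interior points; 2^k - 1 so coarsening reaches one point
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)
phi = np.zeros(n)
for it in range(8):
    phi = v_cycle(phi, f, h)
    print(it, np.linalg.norm(residual(phi, f, h)))

With these choices the residual norm decreases by a roughly constant factor per V-Cycle, largely independent of the grid size n, which is the behaviour described above.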
This approach has the advantage over other methods of often scaling linearly with the number of discrete nodes used. In other words, it can solve these problems to a given accuracy in a number of operations that is proportional to the number of unknowns.
Assume that one has a differential equation which can be solved approximately (with a given accuracy) on a grid i with a given grid point density N_i. Assume furthermore that a solution on any grid i may be obtained with a given effort W_i = \rho K N_i from a solution on the coarser grid i+1. Here, \rho = N_{i+1}/N_i < 1 is the ratio of grid points on "neighboring" grids, assumed to be constant throughout the grid hierarchy, and K is some constant modeling the effort of computing the result for one grid point.
The following recurrence relation is then obtained for the effort of obtaining the solution on grid k:

W_k = W_{k+1} + \rho K N_k

In particular, for the finest grid N_1 this gives

W_1 = W_2 + \rho K N_1

Combining these two expressions, and using N_k = \rho^{k-1} N_1, gives

W_1 = K N_1 \sum_{p=0}^{n} \rho^p
Using the geometric series, we then find (for finite n)

W_1 < \frac{K N_1}{1-\rho},

that is, a solution may be obtained in O(N) time. One exception to this O(N) complexity is W-Cycle multigrid applied to a one-dimensional problem, which results in O(N \log(N)) complexity.
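As a concrete check of this bound, assume standard coarsening, in which each coarse grid has half as many points per dimension as the next finer one. In d dimensions this gives \rho = 2^{-d}, so the bound becomes W_1 < K N_1 / (1 - 2^{-d}); for example W_1 < (4/3) K N_1 in two dimensions and W_1 < (8/7) K N_1 in three dimensions.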
A multigrid method with an intentionally reduced tolerance can be used as an efficient preconditioner for an external iterative solver.[7] The solution may still be obtained in O(N) time.
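As an illustration of this usage (an assumed setup, not the specific method of reference [7]), the following Python sketch reuses the 1D Poisson quantities n, h, f and the v_cycle function from the sketch above and supplies a single V-Cycle per iteration as the preconditioner for SciPy's conjugate gradient solver:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# System matrix matching the 1D Poisson stencil used by the v_cycle sketch above
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr') / h**2
b = f.copy()

# Preconditioner: one V-Cycle applied to the residual, starting from a zero initial guess
M = LinearOperator((n, n), matvec=lambda r: v_cycle(np.zeros_like(r), r, h), dtype=float)

x, info = cg(A, b, M=M, maxiter=50)
print(info, np.linalg.norm(b - A @ x))

Compared with unpreconditioned CG, whose iteration count grows with the grid size for this problem, the number of preconditioned iterations stays essentially bounded, so the total work remains proportional to the number of unknowns.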
If the matrix of the original equation or an eigenvalue problem is symmetric positive definite (SPD), the preconditioner is commonly constructed to be SPD as well, so that the standard conjugate gradient (CG) iterative methods can still be used. Such imposed SPD constraints may complicate the construction of the preconditioner, e.g., requiring coordinated pre- and post-smoothing. However, preconditioned steepest descent and flexible CG methods for SPD linear systems and LOBPCG for symmetric eigenvalue problems are all shown[8] to be robust if the preconditioner is not SPD.
Originally described in Xu's Ph.D. thesis[9] and later published in Bramble–Pasciak–Xu,[10] the BPX preconditioner is one of the two major multigrid approaches (the other being the classic multigrid algorithm, such as the V-cycle) for solving large-scale algebraic systems that arise from the discretization of models in science and engineering described by partial differential equations. In the subspace correction framework,[11] the BPX preconditioner is a parallel subspace correction method, whereas the classic V-cycle is a successive subspace correction method. The BPX preconditioner is known to be naturally more parallel and, in some applications, more robust than the classic V-cycle multigrid method. The method has been widely used by researchers and practitioners since 1990.
Multigrid methods can be generalized in many different ways. They can be applied naturally in a time-stepping solution of parabolic partial differential equations, or they can be applied directly to time-dependent partial differential equations.[12] Research on multilevel techniques for hyperbolic partial differential equations is underway.[13] Multigrid methods can also be applied to integral equations, or for problems in statistical physics.[14]
Another set of multiresolution methods is based upon wavelets. These wavelet methods can be combined with multigrid methods.[15] [16] For example, one use of wavelets is to reformulate the finite element approach in terms of a multilevel method.[17]
Adaptive multigrid exhibits adaptive mesh refinement, that is, it adjusts the grid as the computation proceeds, in a manner dependent upon the computation itself.[18] The idea is to increase resolution of the grid only in regions of the solution where it is needed.
Practically important extensions of multigrid methods include techniques where neither a partial differential equation nor a geometric problem background is used to construct the multilevel hierarchy.[19] Such algebraic multigrid methods (AMG) construct their hierarchy of operators directly from the system matrix. In classical AMG, the levels of the hierarchy are simply subsets of unknowns without any geometric interpretation. (More generally, coarse grid unknowns can be particular linear combinations of fine grid unknowns.) Thus, AMG methods become black-box solvers for certain classes of sparse matrices. AMG is regarded as advantageous mainly where geometric multigrid is too difficult to apply,[20] but it is often used simply because it avoids the coding necessary for a true multigrid implementation. While classical AMG was developed first, a related algebraic method is known as smoothed aggregation (SA).
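As an illustration of this black-box character, the following sketch assumes the third-party PyAMG package is available; it builds a classical (Ruge–Stüben) hierarchy directly from a sparse matrix, here a 2D Poisson test matrix from PyAMG's gallery, without using any geometric information:

import numpy as np
import pyamg

# Sparse system matrix; AMG sees only the matrix entries, not the underlying grid
A = pyamg.gallery.poisson((200, 200), format='csr')
rng = np.random.default_rng(0)
b = rng.random(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)    # classical AMG hierarchy built from A alone
print(ml)                           # summary of the generated coarse levels
x = ml.solve(b, tol=1e-8)
print(np.linalg.norm(b - A @ x))

The smoothed aggregation variant mentioned above is available in the same package as pyamg.smoothed_aggregation_solver.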
In an overview paper[21] by Jinchao Xu and Ludmil Zikatanov, the "algebraic multigrid" methods are understood from an abstract point of view. They developed a unified framework from which existing algebraic multigrid methods can be derived coherently. An abstract theory of how to construct optimal coarse spaces, as well as quasi-optimal spaces, was derived. They also proved that, under appropriate assumptions, the abstract two-level AMG method converges uniformly with respect to the size of the linear system, the coefficient variation, and the anisotropy. Their abstract framework covers most existing AMG methods, such as classical AMG, energy-minimization AMG, unsmoothed and smoothed aggregation AMG, and spectral AMGe.
Multigrid methods have also been adopted for the solution of initial value problems.[22] Of particular interest here are parallel-in-time multigrid methods:[23] in contrast to classical Runge–Kutta or linear multistep methods, they can offer concurrency in the temporal direction. The well-known Parareal parallel-in-time integration method can also be reformulated as a two-level multigrid in time.
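Since Parareal is the simplest member of this family, a minimal sketch may help. The following Python/NumPy code applies it to the scalar test problem y' = \lambda y, with one forward Euler step as the coarse propagator and many forward Euler substeps as the fine propagator; all parameter choices are illustrative, and in this serial form the fine solves are merely independent (in a real parallel-in-time implementation they would be distributed across processors).

import numpy as np

lam = -1.0                      # test problem y' = lam * y, y(0) = 1
T, N = 2.0, 20                  # time horizon split into N coarse slices
dt = T / N
y0 = 1.0

def coarse(y, dt):
    # Coarse propagator G: one forward Euler step over the slice
    return y + dt * lam * y

def fine(y, dt, substeps=100):
    # Fine propagator F: many forward Euler substeps over the slice
    for _ in range(substeps):
        y = y + (dt / substeps) * lam * y
    return y

# Initial guess from a purely sequential coarse sweep
U = np.zeros(N + 1)
U[0] = y0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

# Parareal iterations: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
for k in range(5):
    F_old = np.array([fine(U[n], dt) for n in range(N)])     # independent fine solves
    G_old = np.array([coarse(U[n], dt) for n in range(N)])
    for n in range(N):
        U[n + 1] = coarse(U[n], dt) + F_old[n] - G_old[n]    # sequential coarse correction
    print(k, abs(U[-1] - y0 * np.exp(lam * T)))              # error against the exact solution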
Nearly singular problems arise in a number of important physical and engineering applications. A simple but important example of a nearly singular problem is the displacement formulation of linear elasticity for nearly incompressible materials. Typically, solving such nearly singular systems boils down to treating the nearly singular operator A + \varepsilon M robustly with respect to the positive but small parameter \varepsilon. Here A is a symmetric semidefinite operator with a large null space, while M is a symmetric positive definite operator.