In mathematics, quasilinearization is a technique which replaces a nonlinear differential equation or operator equation (or system of such equations) with a sequence of linear problems, which are presumed to be easier, and whose solutions approximate the solution of the original nonlinear problem with increasing accuracy. It is a generalization of Newton's method; the word "quasilinearization" is commonly used when the differential equation is a boundary value problem.[1][2]
Quasilinearization replaces a given nonlinear operator $N$ with a linear operator which, being simpler, can be used in an iterative fashion to approximately solve equations containing the original nonlinear operator. This is typically done when trying to solve an equation such as $N(y) = 0$, together with certain boundary conditions, for which the equation has a solution $y$. This solution is sometimes called the "reference solution". For quasilinearization to work, the reference solution needs to exist uniquely (at least locally). The process starts with an initial approximation $y_0$ that satisfies the boundary conditions and is "sufficiently close" to the reference solution $y$, in a sense to be defined more precisely later. The first step is to take the Fréchet derivative of the nonlinear operator $N$ at that initial approximation, in order to find the linear operator $L = N'(y_0)$ which best approximates $N$ locally. The nonlinear equation may then be approximated as $N(y) = N(y_0) + L(v) + \text{(higher-order terms)}$, taking $v = y - y_0$. Setting this equation to zero, imposing zero boundary conditions on $v$, and ignoring the higher-order terms gives the linear equation $L(v) = -N(y_0)$. The solution $v$ of this linear equation (with zero boundary conditions) yields the next approximation, which might be called $y_1 = y_0 + v$. Computation of $y_k$ for $k = 2, 3, \ldots$ by solving these linear equations in sequence is analogous to Newton's iteration for a single equation, and requires recomputation of the Fréchet derivative at each iterate $y_{k-1}$. Under the right conditions, the process converges quadratically to the reference solution. Just as with Newton's method for nonlinear algebraic equations, however, difficulties may arise: for instance, the original nonlinear equation may have no solution, more than one solution, or a multiple solution, in which cases the iteration may converge only very slowly, may not converge at all, or may converge to the wrong solution.
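As a minimal sketch of the abstract scheme, the same update rule — solve $L(v) = -N(y_k)$ and set $y_{k+1} = y_k + v$ — can be written down in a finite-dimensional setting, where the Fréchet derivative reduces to the ordinary Jacobian matrix. The operator $N$ below is a made-up algebraic example chosen only for illustration, not a problem from this article:

```python
import numpy as np

# A made-up nonlinear operator N(y) = 0 (a finite-dimensional stand-in
# for the operator equation in the text).
def N(y):
    return np.array([y[0]**2 + y[1]**2 - 5.0,
                     y[0]*y[1] - 2.0])

# The Frechet derivative of N at y: here, simply the Jacobian matrix.
def L(y):
    return np.array([[2.0*y[0], 2.0*y[1]],
                     [y[1],     y[0]]])

y = np.array([2.5, 0.5])                  # initial approximation y_0
for k in range(20):
    v = np.linalg.solve(L(y), -N(y))      # solve the linear problem L(v) = -N(y_k)
    y = y + v                             # y_{k+1} = y_k + v
    if np.max(np.abs(v)) < 1e-12:
        break

print(y)   # converges quadratically to a root of N, here near (2, 1)
```

Each pass recomputes the derivative at the current iterate, exactly as in the operator setting above.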
The practical meaning of the phrase "sufficiently close" above is precisely that the iteration converges to the correct solution. Just as in the case of Newton iteration, there are theorems stating conditions under which one can know ahead of time that the initial approximation is "sufficiently close".
One could instead discretize the original nonlinear operator and generate a (typically large) set of nonlinear algebraic equations for the unknowns, and then use Newton's method proper on this system of equations. Generally speaking, the convergence behavior is similar: a similarly good initial approximation will produce similarly good approximate discrete solutions. However, the quasilinearization approach (linearizing the operator equation instead of the discretized equations) seems to be simpler to think about, and has allowed such techniques as adaptive spatial meshes to be used as the iteration proceeds.[3]
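To make the comparison concrete, the following is a sketch of the discretize-first alternative: a nonlinear two-point boundary value problem (here $y'' = y^2$ with $y(\pm 1) = 1$, the problem used as the example later in this article) is replaced by a finite-difference system of nonlinear algebraic equations, which is then handed whole to a Newton-type solver. The grid size and the choice of `scipy.optimize.fsolve` are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.optimize import fsolve

# Equally spaced grid on [-1, 1]; second-order finite differences.
n = 41
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]

def residual(u):
    """Discretized nonlinear system: one algebraic equation per grid point."""
    r = np.empty(n)
    r[0] = u[0] - 1.0                    # boundary condition y(-1) = 1
    r[-1] = u[-1] - 1.0                  # boundary condition y(1) = 1
    r[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2 - u[1:-1]**2
    return r

# Newton-type iteration on the whole algebraic system at once.
u = fsolve(residual, np.ones(n))         # initial guess y = 1 everywhere
print(np.max(np.abs(residual(u))))       # residual is tiny at the computed solution
```

The convergence behavior mirrors the operator-level iteration; the difference is that the linearization happens inside the algebraic solver rather than at the level of the differential operator.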
As an example to illustrate the process of quasilinearization, we can approximately solve the two-point boundary value problem for the nonlinear ODE

$$y'' = y^2$$

where the boundary conditions are $y(-1) = 1$ and $y(1) = 1$. The exact solution of the differential equation can be expressed in terms of the Weierstrass elliptic function $\wp$:

$$y(x) = 6\wp(x - \alpha \mid 0, \beta)$$

with invariants $g_2 = 0$ and $g_3 = \beta$. The constants $\alpha$ and $\beta$ must then be chosen so that the boundary conditions are satisfied:

$$6\wp(-1 - \alpha \mid 0, \beta) = 1, \qquad 6\wp(1 - \alpha \mid 0, \beta) = 1,$$

which is itself a pair of nonlinear equations for $\alpha$ and $\beta$.
Applying the technique of quasilinearization instead, one finds the linearized equation

$$v'' - 2y_k(x)\,v = -\left(y_k'' - y_k^2\right)$$

by taking the Fréchet derivative of the operator $N(y) = y'' - y^2$ at an unknown approximation $y_k(x)$. Starting from the initial approximation $y_0(x) = 1$ on the interval $-1 \le x \le 1$, the linear problems can be solved in sequence, for instance by collocation at the $n = 21$ Chebyshev points $x_k = \cos(\pi(n-1-k)/(n-1))$, $k = 0, 1, \ldots, n-1$. The iteration converges rapidly: the maximum of the correction $|v(x)|$ over $-1 \le x \le 1$ drops to about $5 \cdot 10^{-9}$ at $y_3(x)$. The computed solution, call it $u_1$, agrees with the exact solution $6\wp(x - \alpha \mid 0, \beta)$ for one choice of the constants. Other values of $\alpha$ and $\beta$ give a second solution $u_2$ of the same boundary value problem; in each case the singularity of $y(x)$ at $x = \alpha$ lies outside the interval of interest. Starting instead from the initial approximation $y_0 = 5x^2 - 4$, which also satisfies the boundary conditions, the quasilinearization iteration converges to $u_2$.
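The iteration just described can be sketched numerically. The code below uses a second-order finite-difference grid rather than the Chebyshev collocation mentioned in the text (an assumption made purely to keep the sketch self-contained); at each step it solves the linearized problem $v'' - 2y_k v = -(y_k'' - y_k^2)$ with zero boundary conditions and updates $y_{k+1} = y_k + v$:

```python
import numpy as np

# Equally spaced grid on [-1, 1] (the text uses n = 21 Chebyshev points;
# a finer uniform grid is an assumption made here for simplicity).
n = 201
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]

def quasilinearize(y):
    """Iterate y_{k+1} = y_k + v, where v solves the linearized problem
    v'' - 2 y_k v = -(y_k'' - y_k^2) with v(-1) = v(1) = 0."""
    for k in range(100):
        # residual N(y_k) = y_k'' - y_k^2 at the interior grid points
        r = (y[2:] - 2.0*y[1:-1] + y[:-2]) / h**2 - y[1:-1]**2
        # discretized linear operator L_k(v) = v'' - 2 y_k v
        J = np.zeros((n, n))
        J[0, 0] = J[-1, -1] = 1.0                  # v = 0 at the endpoints
        for i in range(1, n - 1):
            J[i, i-1] = J[i, i+1] = 1.0 / h**2
            J[i, i] = -2.0 / h**2 - 2.0 * y[i]
        rhs = np.zeros(n)
        rhs[1:-1] = -r
        v = np.linalg.solve(J, rhs)                # solve L_k(v) = -N(y_k)
        y = y + v
        if np.max(np.abs(v)) < 1e-10:
            break
    return y

u1 = quasilinearize(np.ones(n))        # initial approximation y_0 = 1
u2 = quasilinearize(5.0*x**2 - 4.0)    # initial approximation y_0 = 5x^2 - 4
print(u1[n//2], u2[n//2])              # the two distinct solutions at x = 0
```

The two initial approximations lead the same iteration to the two different solutions of the boundary value problem, illustrating the sensitivity to the starting guess noted above.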