Malliavin calculus

In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. P. Malliavin first initiated the calculus on infinite-dimensional space; significant contributors such as S. Kusuoka, D. Stroock, J.-M. Bismut, S. Watanabe, and I. Shigekawa then completed the foundations.

Malliavin calculus is named after Paul Malliavin whose ideas led to a proof that Hörmander's condition implies the existence and smoothness of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. The calculus has been applied to stochastic partial differential equations as well.

The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications in, for example, stochastic filtering.

Overview and history

Malliavin introduced his calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. The calculus also enabled him to prove regularity bounds for the solution's density, and it has been applied to stochastic partial differential equations.

Invariance principle

The usual invariance principle for Lebesgue integration over the whole real line is that, for any real number ε and integrable function f, the following holds

$$\int_{-\infty}^{\infty} f(x)\, d\lambda(x) = \int_{-\infty}^{\infty} f(x+\varepsilon)\, d\lambda(x)$$

and hence

$$\int_{-\infty}^{\infty} f'(x)\, d\lambda(x) = 0.$$

This can be used to derive the integration by parts formula since, setting f = gh, it implies

$$0 = \int_{-\infty}^{\infty} f'\, d\lambda = \int_{-\infty}^{\infty} (gh)'\, d\lambda = \int_{-\infty}^{\infty} g h'\, d\lambda + \int_{-\infty}^{\infty} g' h\, d\lambda.$$
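Assuming g and h decay fast enough that gh vanishes at ±∞, the cancellation can be checked symbolically; the choices g = e^(−x²) and h = x below are illustrative, not from the text.

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = sp.exp(-x**2)  # illustrative choice: decays, so gh vanishes at +-infinity
h = x

# the two sides of integration by parts over the whole real line
int_gh_prime = sp.integrate(g * sp.diff(h, x), (x, -sp.oo, sp.oo))  # integral of g h'
int_gprime_h = sp.integrate(sp.diff(g, x) * h, (x, -sp.oo, sp.oo))  # integral of g' h

# integral of (gh)' over the line is zero, so the two pieces cancel
total = sp.simplify(int_gh_prime + int_gprime_h)
```

Here the first integral evaluates to √π and the second to −√π, so their sum is zero as the formula requires.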

A similar idea can be applied in stochastic analysis for the differentiation along a Cameron–Martin–Girsanov direction. Indeed, let $h_s$ be a square-integrable predictable process and set

$$\varphi(t) = \int_0^t h_s\, ds.$$

If $X$ is a Wiener process, the Girsanov theorem then yields the following analogue of the invariance principle:

$$E\big(F(X+\varepsilon\varphi)\big) = E\left[F(X)\exp\left(\varepsilon\int_0^1 h_s\, dX_s - \frac{1}{2}\varepsilon^2\int_0^1 h_s^2\, ds\right)\right].$$

Differentiating with respect to ε on both sides and evaluating at ε = 0, one obtains the following integration by parts formula:

$$E\big(\langle DF(X), \varphi\rangle\big) = E\left[F(X)\int_0^1 h_s\, dX_s\right].$$

Here, the left-hand side is the Malliavin derivative of the random variable $F$ in the direction $\varphi$, and the integral appearing on the right-hand side should be interpreted as an Itô integral.
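The formula can be illustrated with a Monte Carlo sketch. Take the (illustrative, not from the text) choices h ≡ 1 on [0, 1], so that φ(t) = t and the Itô integral is just W₁, together with F(X) = sin(X₁), which depends only on the terminal value; the formula then reduces to E[cos(W₁)] = E[W₁ sin(W₁)], both sides equal to e^(−1/2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
w1 = rng.standard_normal(n)  # samples of W_1 for a Wiener process on [0, 1]

# With h = 1 (so phi(t) = t) and F(X) = sin(X_1), the integration by
# parts formula reduces to E[cos(W_1)] = E[W_1 sin(W_1)].
lhs = np.cos(w1).mean()         # E<DF(X), phi> = E[F'(W_1) * phi(1)]
rhs = (w1 * np.sin(w1)).mean()  # E[F(X) * integral of h dX] = E[F(W_1) W_1]
# both approach exp(-1/2) ~ 0.6065
```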

Gaussian probability space

A Gaussian probability space is a quadruple $X = (\Omega, \mathcal{F}, P, \mathcal{H})$, that is, a (complete) probability space $(\Omega, \mathcal{F}, P)$ together with a closed subspace $\mathcal{H} \subset L^2(\Omega, \mathcal{F}, P)$ such that all $H \in \mathcal{H}$ are mean-zero Gaussian variables and $\mathcal{F} = \sigma(H : H \in \mathcal{H})$. If one chooses a basis for $\mathcal{H}$, then one calls $X$ a numerical model. On the other hand, for any separable Hilbert space $\mathcal{G}$ there exists a canonical irreducible Gaussian probability space $\operatorname{Seg}(\mathcal{G})$, named the Segal model, having $\mathcal{G}$ as its Gaussian subspace. Properties of a Gaussian probability space that do not depend on the particular choice of basis are called intrinsic, and those that do depend on the choice extrinsic.[1] We denote the countably infinite product of real spaces by

$$\mathbb{R}^{\mathbb{N}} = \prod_{i=1}^{\infty} \mathbb{R}.$$

Let $\gamma$ be the canonical Gaussian measure. By transferring the Cameron–Martin theorem from $(\mathbb{R}^{\mathbb{N}}, \mathcal{B}(\mathbb{R}^{\mathbb{N}}), \gamma^{\mathbb{N}} = \otimes_{n\in\mathbb{N}}\gamma)$ into a numerical model $X$, the additive group of $\mathcal{H}$ defines a quasi-automorphism group on $\Omega$. A construction can be done as follows: choose an orthonormal basis in $\mathcal{H}$, let $\tau_{\alpha}(x) = x + \alpha$ denote the translation on $\mathbb{R}^{\mathbb{N}}$ by $\alpha$, denote the map into the Cameron–Martin space by $j : \mathcal{H} \to \ell^2$, denote

$$L^{\infty-0}(\Omega, \mathcal{F}, P) = \bigcap_{p < \infty} L^p(\Omega, \mathcal{F}, P)$$

and a map

$$q : L^{\infty-0}(\mathbb{R}^{\mathbb{N}}, \mathcal{B}(\mathbb{R}^{\mathbb{N}}), \gamma^{\mathbb{N}}) \to L^{\infty-0}(\Omega, \mathcal{F}, P).$$

We then get a canonical representation of the additive group

$$\rho : \mathcal{H} \to \operatorname{End}\big(L^{\infty-0}(\Omega, \mathcal{F}, P)\big)$$

acting on the endomorphisms by defining

$$\rho(h) = q \circ \tau_{j(h)} \circ q^{-1}.$$
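In one dimension, the quasi-invariance behind this translation construction is the Cameron–Martin change of variables for the standard Gaussian measure: ∫ f(x + α) dγ(x) = ∫ f(x) e^(αx − α²/2) dγ(x). A numerical sketch, with the illustrative test function f(x) = x² (so both sides equal 1 + α²):

```python
import numpy as np

alpha = 0.7
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian density

f = lambda t: t**2  # illustrative test function

# shifted integrand vs. Cameron-Martin density factor exp(alpha*x - alpha^2/2)
lhs = np.sum(f(x + alpha) * gauss) * dx
rhs = np.sum(f(x) * np.exp(alpha * x - alpha**2 / 2) * gauss) * dx
# both approximate E[(Z + alpha)^2] = 1 + alpha^2 = 1.49
```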

One can show that the action of $\rho$ is intrinsic, meaning it does not depend on the choice of basis for $\mathcal{H}$; further, $\rho(h+h') = \rho(h)\rho(h')$ for $h, h' \in \mathcal{H}$, and for the infinitesimal generator of the one-parameter group $(\rho(\varepsilon h))_{\varepsilon}$ one has

$$\lim_{\varepsilon \to 0} \frac{\rho(\varepsilon h) - I}{\varepsilon} = M_h,$$

where $I$ is the identity operator and $M_h$ denotes the multiplication operator by the random variable on $\Omega$ associated to $h \in \mathcal{H}$ (acting on the endomorphisms).[2]

Clark–Ocone formula

See main article: Clark–Ocone theorem.

One of the most useful results from Malliavin calculus is the Clark–Ocone theorem, which allows the process in the martingale representation theorem to be identified explicitly. A simplified version of this theorem is as follows:

Consider the standard Wiener measure on the canonical space $C[0,1]$, equipped with its canonical filtration. For $F : C[0,1] \to \mathbb{R}$ satisfying $E(F(X)^2) < \infty$ which is Lipschitz and such that $F$ has a strong derivative kernel, in the sense that for $\varphi$ in $C[0,1]$

$$\lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big(F(X + \varepsilon\varphi) - F(X)\big) = \int_0^1 F'(X, dt)\,\varphi(t) \quad \text{a.e. } X,$$

then

$$F(X) = E(F(X)) + \int_0^1 H_t\, dX_t,$$

where H is the previsible projection of F'(X, (t, 1]), which may be viewed as the derivative of the function F with respect to a suitable parallel shift of the process X over the portion (t, 1] of its domain.

This may be more concisely expressed by

$$F(X) = E(F(X)) + \int_0^1 E(D_t F \mid \mathcal{F}_t)\, dX_t.$$

Much of the work in the formal development of the Malliavin calculus involves extending this result to the largest possible class of functionals F by replacing the derivative kernel used above by the "Malliavin derivative" denoted $D_t$ in the above statement of the result.
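To see the concise formula in action, take the illustrative functional F(X) = X₁² (not from the text): then E(F(X)) = 1, DₜF = 2X₁, and E(DₜF | 𝓕ₜ) = 2Xₜ, so the Clark–Ocone formula asserts X₁² = 1 + ∫₀¹ 2Xₜ dXₜ. A discretized pathwise check:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                  # time steps on [0, 1]
dt = 1.0 / n
dX = rng.standard_normal(n) * np.sqrt(dt)
X = np.concatenate(([0.0], np.cumsum(dX)))  # Brownian path, X[0] = 0

# For F(X) = X_1^2: E(F) = 1, D_t F = 2 X_1, E(D_t F | F_t) = 2 X_t,
# so Clark-Ocone gives X_1^2 = 1 + integral of 2 X_t dX_t on [0, 1].
ito_sum = np.sum(2 * X[:-1] * dX)  # left-point (Ito) Riemann sum
lhs = X[-1] ** 2
rhs = 1.0 + ito_sum
```

The discrepancy between the two sides is the discretization error of the quadratic variation, which shrinks as the grid is refined.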

Skorokhod integral

See main article: Skorokhod integral.

The Skorokhod integral operator, conventionally denoted δ, is defined as the adjoint of the Malliavin derivative in the white noise case when the Hilbert space is an $L^2$ space. Thus, for u in the domain of the operator, which is a subset of $L^2([0,\infty) \times \Omega)$, and for F in the domain of the Malliavin derivative, we require

$$E(\langle DF, u\rangle) = E(F\,\delta(u)),$$

where the inner product is that on $L^2[0,\infty)$, viz.

$$\langle f, g\rangle = \int_0^{\infty} f(s)g(s)\, ds.$$

The existence of this adjoint follows from the Riesz representation theorem for linear operators on Hilbert spaces.

It can be shown that if u is adapted then

$$\delta(u) = \int_0^{\infty} u_t\, dW_t,$$

where the integral is to be understood in the Itô sense. Thus this provides a method of extending the Itô integral to non-adapted integrands.
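As a sketch of this extension, consider the anticipating integrand uₜ = W₁·1₍₀,₁₎(t), which is not adapted since it depends on the terminal value from time 0. A standard computation not carried out in the text gives δ(u) = W₁² − 1, and the duality relation above can then be checked by Monte Carlo with F = W₁², for which DₜF = 2W₁ on [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(2)
w1 = rng.standard_normal(1_000_000)  # samples of W_1

# Anticipating integrand u_t = W_1 on [0, 1]; a standard computation
# (not derived in the text) gives the Skorokhod integral delta(u) = W_1^2 - 1.
delta_u = w1**2 - 1

# Duality E(<DF, u>) = E(F delta(u)) for F = W_1^2, where D_t F = 2 W_1
# on [0, 1], so <DF, u> = 2 W_1^2 and both expectations equal 2.
lhs = (2 * w1**2).mean()        # E<DF, u>
rhs = (w1**2 * delta_u).mean()  # E(F delta(u))
```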

Applications

The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus also has applications in, for example, stochastic filtering.
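As a concrete instance of the sensitivity computation, the delta of a European call under the Black–Scholes model can be written via Malliavin integration by parts as a weighted expectation, Δ = e^(−rT) E[(S_T − K)⁺ · W_T/(S₀σT)], the weight of Fournié et al.; this moves the derivative off the non-smooth payoff and onto a weight. The parameters below are illustrative, and the Monte Carlo estimate is compared against the closed-form delta N(d₁):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Illustrative Black-Scholes parameters (not from the text)
s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0

z = rng.standard_normal(n)  # W_T / sqrt(T)
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
payoff = np.maximum(s_t - k, 0.0)  # European call payoff

# Malliavin weight W_T / (S_0 sigma T): delta becomes an expectation of the
# (non-smooth) payoff times a weight -- no pathwise differentiation needed.
weight = (math.sqrt(t) * z) / (s0 * sigma * t)
delta_mc = math.exp(-r * t) * np.mean(payoff * weight)

# Closed-form Black-Scholes delta N(d1) for comparison
d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
delta_exact = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2)))
```

The same weighted-expectation trick applies to payoffs where pathwise differentiation fails outright, such as digital options.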


Notes and References

  1. Malliavin, Paul. Stochastic Analysis. Grundlehren der mathematischen Wissenschaften. Berlin, Heidelberg: Springer, 1997. ISBN 3-540-57024-1. pp. 4–15.
  2. Malliavin, Paul. Stochastic Analysis. Grundlehren der mathematischen Wissenschaften. Berlin, Heidelberg: Springer, 1997. ISBN 3-540-57024-1. pp. 20–22.