Gradient Explained

In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p gives the direction and the rate of fastest increase. The gradient transforms like a vector under a change of basis of the space of variables of f. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by:

df = \nabla f \cdot d\mathbf{r},

where df is the total infinitesimal change in f for an infinitesimal displacement d\mathbf{r}, and is seen to be maximal when d\mathbf{r} is in the direction of the gradient \nabla f. The nabla symbol \nabla, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.
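Since gradient descent is mentioned above, here is a minimal sketch of it in Python (the quadratic objective and the fixed 0.1 step size are illustrative assumptions, not part of the article's claims):

# Minimal gradient descent sketch for the illustrative objective
# f(x, y) = (x - 1)**2 + 2*(y + 2)**2, whose gradient is given analytically.
grad = lambda x, y: (2 * (x - 1), 4 * (y + 2))

x, y = 0.0, 0.0          # starting point (arbitrary)
step = 0.1               # hand-picked step size
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy   # move against the gradient

print(x, y)              # converges to the minimizer (1, -2)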

When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of f at p. That is, for f \colon \R^n \to \R, its gradient \nabla f \colon \R^n \to \R^n is defined at the point p = (x_1, \ldots, x_n) in n-dimensional space as the vector

\nabla f(p) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}.

Note that the above definition of the gradient applies only when f is differentiable at p. There exist functions whose partial derivatives exist in every direction but which nevertheless fail to be differentiable. Furthermore, this definition as the vector of partial derivatives is only valid when the basis of the coordinate system is orthonormal. For any other basis, the metric tensor at that point needs to be taken into account.

For example, the function

f(x,y) = \frac{x^2 y}{x^2 + y^2},

with f(0,0) = 0 at the origin, is not differentiable at the origin, as it does not have a well-defined tangent plane there despite having well-defined partial derivatives in every direction at the origin.[1] In this particular example, under rotation of the x-y coordinate system, the above formula for the gradient fails to transform like a vector (the gradient becomes dependent on the choice of basis for the coordinate system) and also fails to point towards the 'steepest ascent' in some orientations. For differentiable functions where the formula for the gradient holds, it can be shown to always transform as a vector under a change of basis, so as to always point towards the fastest increase.
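A small numerical check of this example (plain Python; the probe directions are arbitrary) shows that both partial derivatives at the origin vanish while a diagonal directional derivative does not, so no single vector can reproduce all directional derivatives via a dot product:

import math

# The example above: f(x, y) = x**2 * y / (x**2 + y**2), f(0, 0) = 0.
def f(x, y):
    return 0.0 if x == y == 0 else x**2 * y / (x**2 + y**2)

def dir_deriv(vx, vy, h=1e-8):
    # Directional derivative of f at the origin along the unit vector (vx, vy).
    return (f(h * vx, h * vy) - f(0.0, 0.0)) / h

# Partial derivatives at the origin are both 0 ...
print(dir_deriv(1, 0), dir_deriv(0, 1))   # 0.0 0.0
# ... yet the directional derivative along (1,1)/sqrt(2) is nonzero,
# so it cannot equal the dot product with the (zero) vector of partials.
s = 1 / math.sqrt(2)
print(dir_deriv(s, s))                    # about 0.3536 (= 1/(2*sqrt(2)))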

The gradient is dual to the total derivative df: the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a cotangent vector – a linear functional on vectors. They are related in that the dot product of the gradient of f at a point p with another tangent vector v equals the directional derivative of f at p of the function along v; that is, \nabla f(p) \cdot \mathbf{v} = \frac{\partial f}{\partial \mathbf{v}}(p) = df_p(\mathbf{v}). The gradient admits multiple generalizations to more general functions on manifolds; see the Generalizations section below.

Motivation

Consider a room where the temperature is given by a scalar field, T, so at each point (x, y, z) the temperature is T(x, y, z), independent of time. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly, moving away from (x, y, z). The magnitude of the gradient will determine how fast the temperature rises in that direction.

Consider a surface whose height above sea level at point (x, y) is H(x, y). The gradient of H at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.

The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, since the dot product measures how much the unit vector along the road aligns with the steepest slope: 40% times the cosine of 60°, or 20%.

More generally, if the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of H along the unit vector.
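As a quick check of the arithmetic in the road example, the following Python sketch (the 40% grade and the 60° angle are taken from the text above) computes the slope along the road as a dot product:

import math

# Steepest slope (grade) of the hill: 40%, i.e. the gradient of the
# height function has magnitude 0.4 in the horizontal plane.
grad = (0.4, 0.0)  # gradient pointing along the x-axis, magnitude 0.4

# Unit vector along a road at 60 degrees from the uphill direction.
theta = math.radians(60)
road = (math.cos(theta), math.sin(theta))

# Slope along the road = gradient . road_direction
slope = grad[0] * road[0] + grad[1] * road[1]
print(f"slope along road: {slope:.2f}")  # 0.20, i.e. a 20% grade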

Notation

The gradient of a function f at point a is usually written as \nabla f(a). It may also be denoted by any of the following:

  • \vec{\nabla} f(a) : to emphasize the vector nature of the result.
  • \operatorname{grad} f
  • \partial_i f and f_i : written in Einstein notation, where repeated indices are summed over.

Definition

The gradient (or gradient vector field) of a scalar function f(x_1, x_2, \ldots, x_n) is denoted \nabla f or \vec{\nabla} f, where \nabla (nabla) denotes the vector differential operator, del. The notation \operatorname{grad} f is also commonly used to represent the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector \mathbf{v} at each point x is the directional derivative of f along \mathbf{v}. That is,

\big(\nabla f(x)\big) \cdot \mathbf{v} = D_{\mathbf{v}} f(x)

where the right-hand side is the directional derivative, and there are many ways to represent it. Formally, the derivative is dual to the gradient; see the relationship with the derivative below.

When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient).

The magnitude and direction of the gradient vector are independent of the particular coordinate representation.

Cartesian coordinates

In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by

\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k},

where \mathbf{i}, \mathbf{j}, \mathbf{k} are the standard unit vectors in the directions of the x, y and z coordinates, respectively. For example, the gradient of the function

f(x,y,z) = 2x + 3y^2 - \sin(z)

is

\nabla f(x, y, z) = 2\mathbf{i} + 6y\,\mathbf{j} - \cos(z)\,\mathbf{k},

or, in column-vector form,

\nabla f(x, y, z) = \begin{bmatrix} 2 \\ 6y \\ -\cos z \end{bmatrix}.
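A short Python sketch (standard library only) that verifies this analytic gradient against a central-difference approximation at an arbitrarily chosen point:

import math

def f(x, y, z):
    return 2*x + 3*y**2 - math.sin(z)

# Analytic gradient from the example above: (2, 6y, -cos z).
def grad_f(x, y, z):
    return (2.0, 6.0*y, -math.cos(z))

# Central-difference approximation of each partial derivative.
def numerical_grad(p, h=1e-6):
    out = []
    for i in range(3):
        hi = list(p); hi[i] += h
        lo = list(p); lo[i] -= h
        out.append((f(*hi) - f(*lo)) / (2*h))
    return tuple(out)

p = (1.0, 2.0, 0.5)
print(grad_f(*p))          # (2.0, 12.0, -0.87758...)
print(numerical_grad(p))   # agrees to ~1e-9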

In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.

Cylindrical and spherical coordinates

See main article: Del in cylindrical and spherical coordinates.

In cylindrical coordinates with a Euclidean metric, the gradient is given by:[2]

\nabla f(\rho, \varphi, z) = \frac{\partial f}{\partial \rho}\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi + \frac{\partial f}{\partial z}\mathbf{e}_z,

where \rho is the axial distance, \varphi is the azimuthal or azimuth angle, z is the axial coordinate, and \mathbf{e}_\rho, \mathbf{e}_\varphi and \mathbf{e}_z are unit vectors pointing along the coordinate directions.
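The following Python sketch (the test scalar field is an arbitrary assumption) numerically confirms that this cylindrical-coordinates formula, once the unit vectors are expressed in Cartesian components, reproduces the ordinary Cartesian gradient:

import math

def f_cart(x, y, z):
    # Arbitrary test scalar field (an assumption for this sketch).
    return x**2 * y + math.sin(z)

def f_cyl(rho, phi, z):
    return f_cart(rho * math.cos(phi), rho * math.sin(phi), z)

def d(fun, args, i, h=1e-6):
    # Central difference in the i-th argument.
    a = list(args); a[i] += h; hi = fun(*a)
    a[i] -= 2*h;    lo = fun(*a)
    return (hi - lo) / (2*h)

rho, phi, z = 1.3, 0.7, 0.4

# Cylindrical formula: df/drho e_rho + (1/rho) df/dphi e_phi + df/dz e_z.
comp = (d(f_cyl, (rho, phi, z), 0),
        d(f_cyl, (rho, phi, z), 1) / rho,
        d(f_cyl, (rho, phi, z), 2))

# Express e_rho, e_phi, e_z in Cartesian components and assemble the vector.
e_rho = (math.cos(phi), math.sin(phi), 0.0)
e_phi = (-math.sin(phi), math.cos(phi), 0.0)
e_z   = (0.0, 0.0, 1.0)
grad_from_cyl = tuple(comp[0]*r + comp[1]*p + comp[2]*w
                      for r, p, w in zip(e_rho, e_phi, e_z))

# Direct Cartesian gradient at the same point.
x, y = rho * math.cos(phi), rho * math.sin(phi)
grad_cart = tuple(d(f_cart, (x, y, z), i) for i in range(3))

print(grad_from_cyl)
print(grad_cart)   # the two should agree to ~1e-8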

In spherical coordinates, the gradient is given by:

\nabla f(r, \theta, \varphi) = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi,

where r is the radial distance, \varphi is the azimuthal angle and \theta is the polar angle, and \mathbf{e}_r, \mathbf{e}_\theta and \mathbf{e}_\varphi are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).

For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).

General coordinates

We consider general coordinates, which we write as x^1, \ldots, x^i, \ldots, x^n, where n is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so x^2 refers to the second component, not the quantity x squared. The index variable i refers to an arbitrary element x^i. Using Einstein notation, the gradient can then be written as:

\nabla f = \frac{\partial f}{\partial x^i} g^{ij} \mathbf{e}_j \qquad \text{(note that its dual is } \mathrm{d}f = \frac{\partial f}{\partial x^i} \mathbf{e}^i \text{)},

where \mathbf{e}_i = \partial \mathbf{x} / \partial x^i and \mathbf{e}^i = \mathrm{d}x^i refer to the unnormalized local covariant and contravariant bases respectively, g^{ij} is the inverse metric tensor, and the Einstein summation convention implies summation over i and j.

If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as \hat{\mathbf{e}}_i and \hat{\mathbf{e}}^i, using the scale factors (also known as Lamé coefficients) h_i = \lVert \mathbf{e}_i \rVert = \sqrt{g_{ii}} = 1 / \lVert \mathbf{e}^i \rVert :

\nabla f = \frac{\partial f}{\partial x^i} g^{ij} \hat{\mathbf{e}}_j \sqrt{g_{jj}} = \sum_{i=1}^n \frac{\partial f}{\partial x^i} \frac{1}{h_i} \hat{\mathbf{e}}_i \qquad \text{(and } \mathrm{d}f = \sum_{i=1}^n \frac{\partial f}{\partial x^i} \frac{1}{h_i} \hat{\mathbf{e}}^i \text{)},

where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, \hat{\mathbf{e}}_i, \hat{\mathbf{e}}^i, and h_i are neither contravariant nor covariant.

The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
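As a sketch of how the scale-factor form is used in practice, the following Python snippet computes the physical components (1/h_i) ∂f/∂x^i for spherical coordinates (the test function is an assumption chosen so the answer is easy to verify by hand):

import math

# Scale factors for spherical coordinates (r, theta, phi), with theta the
# polar angle, matching the convention used above: h = (1, r, r sin(theta)).
def scale_factors(r, theta, phi):
    return (1.0, r, r * math.sin(theta))

def grad_components(f, q, h=1e-6):
    # Physical components of grad f in an orthogonal system:
    # (1/h_i) * df/dq_i for each coordinate q_i.
    hs = scale_factors(*q)
    comps = []
    for i in range(3):
        a = list(q); a[i] += h; hi_val = f(*a)
        a[i] -= 2*h; lo_val = f(*a)
        comps.append((hi_val - lo_val) / (2*h) / hs[i])
    return tuple(comps)

# Example: f = r^2 cos(theta); grad has components (2r cos(theta), -r sin(theta), 0).
f = lambda r, theta, phi: r**2 * math.cos(theta)
print(grad_components(f, (2.0, 0.9, 0.3)))   # about (2.487, -1.567, 0.0)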

Relationship with derivative

Relationship with total derivative

The gradient is closely related to the total derivative (total differential) df: they are transpose (dual) to each other. Using the convention that vectors in \R^n are represented by column vectors, and that covectors (linear maps \R^n \to \R) are represented by row vectors, the gradient \nabla f and the derivative df are expressed as a column and row vector, respectively, with the same components, but transpose of each other:

\nabla f(p) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(p) \\ \vdots \\ \frac{\partial f}{\partial x_n}(p) \end{bmatrix}; \qquad df_p = \begin{bmatrix} \frac{\partial f}{\partial x_1}(p) & \cdots & \frac{\partial f}{\partial x_n}(p) \end{bmatrix}.

While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (or covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, \nabla f(p) \in T_p \R^n, while the derivative is a map from the tangent space to the real numbers, df_p \colon T_p \R^n \to \R. The tangent spaces at each point of \R^n can be "naturally" identified with the vector space \R^n itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space (\R^n)^* of covectors; thus the value of the gradient at a point can be thought of as a vector in the original \R^n, not just as a tangent vector.

Computationally, given a tangent vector, the vector can be multiplied by the derivative (as matrices), which is equal to taking the dot product with the gradient:

(df_p)(v) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(p) & \cdots & \frac{\partial f}{\partial x_n}(p) \end{bmatrix} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p) \, v_i = \begin{bmatrix} \frac{\partial f}{\partial x_1}(p) \\ \vdots \\ \frac{\partial f}{\partial x_n}(p) \end{bmatrix} \cdot \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \nabla f(p) \cdot v
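In code, the row-vector/column-vector picture looks as follows (a minimal NumPy sketch; the function f(x, y) = xy + y^2 and the test vector are assumptions for illustration):

import numpy as np

# Gradient of f(x, y) = x*y + y**2 at p = (1.0, 2.0): (y, x + 2y) = (2, 5).
grad = np.array([2.0, 5.0])        # column-vector convention
df   = grad.reshape(1, -1)         # derivative: same components as a row vector

v = np.array([3.0, -1.0])          # an arbitrary tangent vector

print(df @ v)                      # matrix product: [1.0]
print(np.dot(grad, v))             # dot product:     1.0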

Differential or (exterior) derivative

The best linear approximation to a differentiable function f \colon \R^n \to \R at a point x in \R^n is a linear map from \R^n to \R which is often denoted by df_x or Df(x) and called the differential or total derivative of f at x. The function df, which maps x to df_x, is called the total differential or exterior derivative of f and is an example of a differential 1-form.

Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.

The gradient is related to the differential by the formula

(\nabla f)_x \cdot v = df_x(v)

for any v \in \R^n, where \cdot is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.

If \R^n is viewed as the space of (dimension n) column vectors (of real numbers), then one can regard df as the row vector with components

\left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right),

so that df_x(v) is given by matrix multiplication. Assuming the standard Euclidean metric on \R^n, the gradient is then the corresponding column vector, that is,

\nabla f = (df)^\mathsf{T}.

Linear approximation to a function

The gradient of a function f from the Euclidean space \R^n to \R at any particular point x_0 in \R^n characterizes the best linear approximation to f at x_0. The approximation is as follows:

f(x) \approx f(x_0) + (\nabla f)_{x_0} \cdot (x - x_0)

for x close to x_0, where (\nabla f)_{x_0} is the gradient of f computed at x_0, and the dot denotes the dot product on \R^n. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of f at x_0.
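A brief numerical illustration of this approximation (the test function exp(x)cos(y) is an assumption): the error of the first-order approximation shrinks roughly quadratically as the step size decreases, as expected from the Taylor expansion.

import math

def f(x, y):
    return math.exp(x) * math.cos(y)

def grad_f(x, y):
    return (math.exp(x) * math.cos(y), -math.exp(x) * math.sin(y))

x0 = (0.5, 1.0)
g = grad_f(*x0)

def linear_approx(x, y):
    # f(x0) + grad f(x0) . (x - x0)
    return f(*x0) + g[0] * (x - x0[0]) + g[1] * (y - x0[1])

# Near x0 the approximation error shrinks quadratically with the step size.
for h in (0.1, 0.01, 0.001):
    p = (x0[0] + h, x0[1] + h)
    print(h, abs(f(*p) - linear_approx(*p)))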

Relationship with Fréchet derivative

Let U be an open set in \R^n. If the function f \colon U \to \R is differentiable, then the differential of f is the Fréchet derivative of f. Thus \nabla f is a function from U to the space \R^n such that

\lim_{h \to 0} \frac{|f(x+h) - f(x) - \nabla f(x) \cdot h|}{\lVert h \rVert} = 0,

where \cdot is the dot product.

As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:

Linearity
  • The gradient is linear in the sense that if f and g are two real-valued functions differentiable at the point a \in \R^n, and \alpha and \beta are two constants, then \alpha f + \beta g is differentiable at a, and moreover \nabla\left(\alpha f + \beta g\right)(a) = \alpha \nabla f(a) + \beta \nabla g(a).

Product rule
  • If f and g are real-valued functions differentiable at a point a \in \R^n, then the product rule asserts that the product fg is differentiable at a, and \nabla (fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).

Chain rule
  • Suppose that f \colon A \to \R is a real-valued function defined on a subset A of \R^n, and that f is differentiable at a point a. There are two forms of the chain rule applying to the gradient. First, suppose that the function g is a parametric curve; that is, a function g \colon I \to \R^n maps a subset I \subset \R into \R^n. If g is differentiable at a point c \in I such that g(c) = a, then (f \circ g)'(c) = \nabla f(a) \cdot g'(c), where \circ is the composition operator: (f \circ g)(x) = f(g(x)).

    More generally, if instead I \subset \R^k, then the following holds: \nabla (f \circ g)(c) = \big(Dg(c)\big)^\mathsf{T} \big(\nabla f(a)\big), where (Dg)^\mathsf{T} denotes the transpose Jacobian matrix (a numerical check of this form appears after this list).

    For the second form of the chain rule, suppose that h \colon I \to \R is a real-valued function on a subset I of \R, and that h is differentiable at the point f(a) \in I. Then \nabla (h \circ f)(a) = h'\big(f(a)\big) \nabla f(a).
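Here is the numerical check of the transpose-Jacobian form of the chain rule referenced above (a NumPy sketch; the functions f and g are arbitrary test choices):

import numpy as np

# f : R^2 -> R, g : R^2 -> R^2 (hypothetical test functions for this sketch).
f = lambda u: u[0]**2 + 3*u[1]
g = lambda c: np.array([np.sin(c[0]) * c[1], c[0] + c[1]**2])

def num_grad(fun, x, h=1e-6):
    # Central-difference gradient of a scalar function.
    x = np.asarray(x, dtype=float)
    return np.array([(fun(x + h*e) - fun(x - h*e)) / (2*h)
                     for e in np.eye(len(x))])

def num_jac(fun, x, h=1e-6):
    # Central-difference Jacobian of a vector function (rows = outputs).
    x = np.asarray(x, dtype=float)
    cols = [(fun(x + h*e) - fun(x - h*e)) / (2*h) for e in np.eye(len(x))]
    return np.stack(cols, axis=1)

c = np.array([0.4, 1.2])
lhs = num_grad(lambda x: f(g(x)), c)          # grad of the composition
rhs = num_jac(g, c).T @ num_grad(f, g(c))     # (Dg)^T applied to grad f
print(lhs, rhs)   # the two should agree to ~1e-8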

Further properties and applications

Level sets

A level surface, or isosurface, is the set of all points where some function has a given value.

If f is differentiable, then the dot product (\nabla f)_x \cdot v of the gradient at a point x with a vector v gives the directional derivative of f at x in the direction v. It follows that in this case the gradient of f is orthogonal to the level sets of f. For example, a level surface in three-dimensional space is defined by an equation of the form F(x, y, z) = c. The gradient of F is then normal to the surface.
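A minimal sketch of this orthogonality claim, using the unit sphere F(x, y, z) = x^2 + y^2 + z^2 - 1 = 0 as the level surface (the point and tangent directions are hand-picked):

import numpy as np

# Level surface F(x, y, z) = x^2 + y^2 + z^2 - 1 = 0 (the unit sphere).
F_grad = lambda p: 2 * p   # analytic gradient of F

# A point on the sphere and two tangent directions at that point.
p  = np.array([1.0, 0.0, 0.0])
t1 = np.array([0.0, 1.0, 0.0])   # tangent to the sphere at p
t2 = np.array([0.0, 0.0, 1.0])   # tangent to the sphere at p

g = F_grad(p)
print(np.dot(g, t1), np.dot(g, t2))   # both 0: the gradient is normal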

More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form F(P) = 0 such that dF is nowhere zero. The gradient of F is then normal to the hypersurface.

Similarly, an affine algebraic hypersurface may be defined by an equation F = 0, where F is a polynomial. The gradient of F is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.

Conservative vector fields and the gradient theorem

See main article: Gradient theorem.

The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
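The following NumPy sketch illustrates the gradient theorem numerically: the line integral of a gradient field along a deliberately wiggly path depends only on the endpoints (the potential f(x, y) = x^2 y and the path are assumptions for illustration):

import numpy as np

# f(x, y) = x**2 * y, so grad f = (2xy, x**2); by the gradient theorem the
# line integral of grad f along any path from a to b equals f(b) - f(a).
f = lambda p: p[0]**2 * p[1]

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])

# A deliberately wiggly path from a to b, sampled finely.
t = np.linspace(0.0, 1.0, 20001)
wiggle = (t * (1 - t))[:, None] * np.column_stack(
    [np.sin(3 * np.pi * t), np.cos(5 * np.pi * t)])
path = a + np.outer(t, b - a) + wiggle

# Midpoint-rule approximation of the line integral of grad f . dr.
dr = np.diff(path, axis=0)
mid = 0.5 * (path[1:] + path[:-1])
grad_vals = np.column_stack([2 * mid[:, 0] * mid[:, 1], mid[:, 0]**2])
integral = np.sum(np.einsum('ij,ij->i', grad_vals, dr))

print(integral, f(b) - f(a))   # both approximately 2.0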

Gradient is direction of steepest ascent

The gradient of a function f \colon \R^n \to \R at a point x is also the direction of its steepest ascent, i.e. it maximizes the directional derivative:

Let v \in \R^n be an arbitrary unit vector. With the directional derivative defined as

\nabla_v f(x) = \lim_{h \to 0} \frac{f(x + vh) - f(x)}{h},

we get, by substituting the function f(x + vh) with its Taylor series,

\nabla_v f(x) = \lim_{h \to 0} \frac{f(x) + \nabla f \cdot vh + R - f(x)}{h},

where R denotes higher-order terms in vh.

Dividing by h, and taking the limit, yields a term which is bounded from above by the Cauchy–Schwarz inequality[3]

|\nabla_v f(x)| = |\nabla f \cdot v| \le |\nabla f| \, |v| = |\nabla f|.

Choosing

v^* = \nabla f / |\nabla f|

maximizes the directional derivative, and attains the upper bound:

|\nabla_{v^*} f(x)| = \frac{|\nabla f|^2}{|\nabla f|} = |\nabla f|.
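A quick numerical illustration of this argument (the test function is an assumption): among many random unit vectors, none achieves a directional derivative exceeding the one along the normalized gradient, which in turn matches |∇f|:

import numpy as np

rng = np.random.default_rng(0)

f = lambda p: p[0]**2 + np.sin(p[1])            # test function (an assumption)
grad = lambda p: np.array([2*p[0], np.cos(p[1])])

x = np.array([0.8, 0.3])
g = grad(x)

def dir_deriv(v, h=1e-6):
    # Central-difference directional derivative of f at x along v.
    return (f(x + h*v) - f(x - h*v)) / (2*h)

# Sample many random unit vectors; none beats the normalized gradient.
best = max(dir_deriv(v / np.linalg.norm(v))
           for v in rng.normal(size=(1000, 2)))
print(best, dir_deriv(g / np.linalg.norm(g)), np.linalg.norm(g))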

Generalizations

Jacobian

See main article: Jacobian matrix and determinant.

The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative.

Suppose f \colon \R^n \to \R^m is a function such that each of its first-order partial derivatives exists on \R^n. Then the Jacobian matrix of f is defined to be an m \times n matrix, denoted by \mathbf{J}_f(x) or simply \mathbf{J}. The (i, j)th entry is \mathbf{J}_{ij} = \partial f_i / \partial x_j. Explicitly

\mathbf{J} = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^\mathsf{T} f_1 \\ \vdots \\ \nabla^\mathsf{T} f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}.
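A central-difference sketch of the Jacobian matrix in Python (the map f : R^2 -> R^3 is a hypothetical test function; its analytic Jacobian is given in the comment for comparison):

import numpy as np

# f : R^2 -> R^3 (a hypothetical test map for this sketch).
def f(x):
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def jacobian(f, x, h=1e-6):
    # Central-difference Jacobian: column j holds df/dx_j.
    x = np.asarray(x, dtype=float)
    cols = [(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(len(x))]
    return np.stack(cols, axis=1)

x = np.array([0.5, 2.0])
print(jacobian(f, x))
# Analytic Jacobian for comparison:
# [[x1, x0], [cos(x0), 0], [0, 2*x1]] = [[2.0, 0.5], [0.8776, 0], [0, 4.0]]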

Gradient of a vector field

See also: Covariant derivative. Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity.

In rectangular coordinates, the gradient of a vector field \mathbf{f} = (f^1, f^2, f^3) is defined by:

\nabla \mathbf{f} = g^{jk} \frac{\partial f^i}{\partial x^j} \mathbf{e}_i \otimes \mathbf{e}_k,

(where the Einstein summation notation is used and the tensor product of the vectors \mathbf{e}_i and \mathbf{e}_k is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:

\frac{\partial f^i}{\partial x^j} = \frac{\partial (f^1, f^2, f^3)}{\partial (x^1, x^2, x^3)}.

In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:

\nabla \mathbf{f} = g^{jk} \left(\frac{\partial f^i}{\partial x^j} + \Gamma^i_{jl} f^l\right) \mathbf{e}_i \otimes \mathbf{e}_k,

where g^{jk} are the components of the inverse metric tensor and the \mathbf{e}_i are the coordinate basis vectors.

Expressed more invariantly, the gradient of a vector field \mathbf{f} can be defined by the Levi-Civita connection and metric tensor:[4]

\nabla^a f^b = g^{ac} \nabla_c f^b,

where \nabla_c is the connection.

Riemannian manifolds

For any smooth function f on a Riemannian manifold (M, g), the gradient of f is the vector field \nabla f such that for any vector field X,

g(\nabla f, X) = \partial_X f,

that is,

g_x\big((\nabla f)_x, X_x\big) = (\partial_X f)(x),

where g_x(\cdot, \cdot) denotes the inner product of tangent vectors at x defined by the metric g, and \partial_X f is the function that takes any point x \in M to the directional derivative of f in the direction X, evaluated at x. In other words, in a coordinate chart \varphi from an open subset of M to an open subset of \R^n, (\partial_X f)(x) is given by:

\sum_{j=1}^n X^{j}\big(\varphi(x)\big) \frac{\partial}{\partial x_j}\big(f \circ \varphi^{-1}\big) \Bigg|_{\varphi(x)},

where X^j denotes the jth component of X in this coordinate chart.

So, the local form of the gradient takes the form:

\nabla f = g^{ik} \frac{\partial f}{\partial x^k} \mathbf{e}_i.

Generalizing the case M = \R^n, the gradient of a function is related to its exterior derivative, since

(\partial_X f)(x) = (df)_x(X_x).

More precisely, the gradient \nabla f is the vector field associated to the differential 1-form df using the musical isomorphism \sharp = \sharp^g \colon T^*M \to TM (called "sharp") defined by the metric g. The relation between the exterior derivative and the gradient of a function on \R^n is a special case of this in which the metric is the flat metric given by the dot product.
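As a concrete instance of the local form \nabla f = g^{ik} (\partial f / \partial x^k) \mathbf{e}_i, the following sketch applies it to polar coordinates on the plane, whose metric is diag(1, r^2) (the test function is an assumption):

import numpy as np

# Polar coordinates (r, theta) on the plane: metric g = diag(1, r**2).
# This sketch just applies grad f = g^{ij} (df/dx^j) e_i componentwise.
def gradient_components(f, q, h=1e-6):
    r, theta = q
    g_inv = np.diag([1.0, 1.0 / r**2])     # inverse metric at q
    df = np.array([(f(r + h, theta) - f(r - h, theta)) / (2*h),
                   (f(r, theta + h) - f(r, theta - h)) / (2*h)])
    return g_inv @ df                       # contravariant components of grad f

# f = r**2 * cos(theta): components should be (2r cos(theta), -sin(theta)).
f = lambda r, theta: r**2 * np.cos(theta)
print(gradient_components(f, (2.0, 0.7)))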


Notes and References

1. "Non-differentiable functions must have discontinuous partial derivatives - Math Insight", mathinsight.org. Retrieved 2023-10-21.
2. .
3. T. Arens, Mathematik, 5th ed., Springer Spektrum Berlin, 2022. ISBN 978-3-662-64388-4.
4. .