Exponentiation Explained

[Figure: the notation $b^n$, with base $b$ and exponent $n$]

In mathematics, exponentiation is an operation involving two numbers: the base and the exponent or power. Exponentiation is written as $b^n$, where $b$ is the base and $n$ is the power; this is pronounced as "$b$ (raised) to the (power of) $n$".[1] When $n$ is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, $b^n$ is the product of multiplying $n$ bases:[1]
$$b^n = \underbrace{b \times b \times \cdots \times b}_{n \text{ times}}.$$

The exponent is usually shown as a superscript to the right of the base. In that case, $b^n$ is called "b raised to the nth power", "b (raised) to the power of n", "the nth power of b", "b to the nth power", or most briefly as "b to the n(th)".

Starting from the basic fact stated above that, for any positive integer $n$, $b^n$ is $n$ occurrences of $b$ all multiplied by each other, several other properties of exponentiation directly follow. In particular:[2]

$$b^{n+m} = \underbrace{b \times \cdots \times b}_{n+m \text{ times}} = \underbrace{b \times \cdots \times b}_{n \text{ times}} \times \underbrace{b \times \cdots \times b}_{m \text{ times}} = b^n \times b^m$$

In other words, when multiplying a base raised to one exponent by the same base raised to another exponent, the exponents add. From this basic rule that exponents add, we can derive that $b^0$ must be equal to 1 for any $b \ne 0$, as follows. For any $n$, $b^0 \times b^n = b^{0+n} = b^n$. Dividing both sides by $b^n$ gives $b^0 = b^n / b^n = 1$.

The fact that $b^1 = b$ can similarly be derived from the same rule. For example, $(b^1)^3 = b^1 \times b^1 \times b^1 = b^{1+1+1} = b^3$. Taking the cube root of both sides gives $b^1 = b$.

The rule that multiplying makes exponents add can also be used to derive the properties of negative integer exponents. Consider the question of what $b^{-1}$ should mean. In order to respect the "exponents add" rule, it must be the case that $b^{-1} \times b^1 = b^{-1+1} = b^0 = 1$. Dividing both sides by $b^1$ gives $b^{-1} = 1/b^1$, which can be more simply written as $b^{-1} = 1/b$, using the result from above that $b^1 = b$. By a similar argument, $b^{-n} = 1/b^n$.

The properties of fractional exponents also follow from the same rule. For example, suppose we consider $\sqrt{b}$ and ask if there is some suitable exponent, which we may call $r$, such that $b^r = \sqrt{b}$. From the definition of the square root, we have that $\sqrt{b} \times \sqrt{b} = b$. Therefore, the exponent $r$ must be such that $b^r \times b^r = b$. Using the fact that multiplying makes exponents add gives $b^{r+r} = b$. The $b$ on the right-hand side can also be written as $b^1$, giving $b^{r+r} = b^1$. Equating the exponents on both sides, we have $r + r = 1$. Therefore, $r = \tfrac{1}{2}$, so $\sqrt{b} = b^{1/2}$.
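The derivations above can be spot-checked numerically. A quick sketch in Python, using an arbitrary positive base (the value 7 is chosen purely for illustration):

```python
import math

b = 7.0  # arbitrary positive base, chosen only for illustration

# "Exponents add": b^m * b^n == b^(m+n)
assert math.isclose(b**2 * b**3, b**5)

# b^0 = 1 and b^1 = b follow from that rule
assert b**0 == 1.0 and b**1 == b

# Negative exponent: b^-1 = 1/b
assert math.isclose(b**-1, 1 / b)

# Fractional exponent: b^(1/2) is the square root
assert math.isclose(b**0.5, math.sqrt(b))
```

The same checks pass for any positive base, since they only exercise the "exponents add" rule.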

The definition of exponentiation can be extended to allow any real or complex exponent. Exponentiation by integer exponents can also be defined for a wide variety of algebraic structures, including matrices.

Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.

Etymology

The term exponent originates from the Latin exponentem, the present participle of exponere, meaning "to put forth".[3] The term power (Latin: potentia, potestas, dignitas) is a mistranslation[4] [5] of the ancient Greek δύναμις (dúnamis, here: "amplification"[4]) used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios.[6]

History

Antiquity

The Sand Reckoner

See main article: The Sand Reckoner. In The Sand Reckoner, Archimedes proved the law of exponents, $10^a \cdot 10^b = 10^{a+b}$, necessary to manipulate powers of $10$.[7] He then used powers of $10$ to estimate the number of grains of sand that can be contained in the universe.

Islamic Golden Age

Māl and kaʿbah ("square" and "cube")

In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال (māl, "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة (Kaʿbah, "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters mīm (m) and kāf (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi.

15th–18th century

Introducing exponents

Nicolas Chuquet used a form of exponential notation in the 15th century, for example to represent .[8] This was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi would use Roman numerals for exponents in a way similar to that of Chuquet, for example for .[9]

"Exponent"; "square" and "cube"

The word exponent was coined in 1544 by Michael Stifel.[10] [11] In the 16th century, Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth).[12] Biquadrate has been used to refer to the fourth power as well.

Modern exponential notation

In 1636, James Hume used in essence modern notation, when in L'algèbre de Viète he wrote for .[13] Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I.[14]

Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as .

"Indices"

Samuel Jeake introduced the term indices in 1696. The term involution was used synonymously with the term indices, but had declined in usage[15] and should not be confused with its more common meaning.

Variable exponents, non-integer exponents

In 1748, Leonhard Euler introduced variable exponents, and, implicitly, non-integer exponents by writing:

Terminology

The expression $b^2$ is called "the square of $b$" or "$b$ squared", because the area of a square with side-length $b$ is $b^2$. (It is true that it could also be called "$b$ to the second power", but "the square of $b$" and "$b$ squared" are so ingrained by tradition and convenience that "$b$ to the second power" tends to sound unusual or clumsy.)

Similarly, the expression $b^3$ is called "the cube of $b$" or "$b$ cubed", because the volume of a cube with side-length $b$ is $b^3$.

When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, $3^5 = 3 \cdot 3 \cdot 3 \cdot 3 \cdot 3 = 243$. The base $3$ appears $5$ times in the multiplication, because the exponent is $5$. Here, $243$ is the 5th power of 3, or 3 raised to the 5th power.

The word "raised" is usually omitted, and sometimes "power" as well, so $3^5$ can be simply read "3 to the 5th", or "3 to the 5". Therefore, the exponentiation $b^n$ can be expressed as "b to the power of n", "b to the nth power", "b to the nth", or most briefly as "b to the n".

Integer exponents

The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations.

Positive exponents

The definition of exponentiation as an iterated multiplication can be formalized by using induction,[16] and this definition can be used as soon as one has an associative multiplication:

The base case is $b^1 = b$ and the recurrence is $b^{n+1} = b^n \cdot b$.

The associativity of multiplication implies that for any positive integers $m$ and $n$,
$$b^{m+n} = b^m b^n,$$
and
$$(b^m)^n = b^{mn}.$$
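The inductive definition translates directly into code. A minimal Python sketch (the helper name `power` is ours, not a library function):

```python
def power(b, n):
    """Exponentiation by the inductive definition:
    base case b^1 = b, recurrence b^(n+1) = b^n * b."""
    if n == 1:
        return b
    return power(b, n - 1) * b

# The two consequences of associativity:
b, m, n = 3, 4, 5
assert power(b, m + n) == power(b, m) * power(b, n)   # b^(m+n) = b^m b^n
assert power(power(b, m), n) == power(b, m * n)       # (b^m)^n = b^(mn)
```

Because the definition only uses an associative multiplication, the same code works unchanged for any type whose `*` is associative.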

Zero exponent

As mentioned earlier, a (nonzero) number raised to the power $0$ is $1$:[17] [1]

$$b^0 = 1.$$

This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula $b^{m+n} = b^m b^n$ also holds for $n = 0$.

The case of $0^0$ is controversial. In contexts where only integer powers are considered, the value $1$ is generally assigned to $0^0$ but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context.

Negative exponents

Exponentiation with negative exponents is defined by the following identity, which holds for any integer $n$ and nonzero $b$:
$$b^{-n} = \frac{1}{b^n}.$$[1]
Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity ($\infty$).[18]

This definition of exponentiation with negative exponents is the only one that allows extending the identity $b^{m+n} = b^m b^n$ to negative exponents (consider the case $m = -n$).

The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure with an associative multiplication and a multiplicative identity denoted $1$ (for example, the square matrices of a given dimension). In particular, in such a structure, the inverse of an invertible element $x$ is standardly denoted $x^{-1}$.

Identities and properties

The following identities, often called exponent rules, hold for all integer exponents, provided that the base is non-zero:[1]

$$b^{m+n} = b^m b^n, \qquad \left(b^m\right)^n = b^{mn}, \qquad (bc)^n = b^n c^n.$$

Unlike addition and multiplication, exponentiation is not commutative. For example, $2^3 = 8$, whereas $3^2 = 9$. Also unlike addition and multiplication, exponentiation is not associative. For example, $(2^3)^2 = 8^2 = 64$, whereas $2^{(3^2)} = 2^9 = 512$. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up (or left-associative). That is,
$$b^{p^q} = b^{\left(p^q\right)},$$

which, in general, is different from

$$\left(b^p\right)^q = b^{pq}.$$
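Most programming languages that have a power operator follow the same right-associative convention. A short Python check:

```python
# Serial exponentiation is right-associative in Python, as in
# superscript notation: 2**3**2 is evaluated as 2**(3**2), not (2**3)**2.
top_down = 2**3**2        # 2**(3**2) = 2**9
bottom_up = (2**3)**2     # 8**2
assert top_down == 512
assert bottom_up == 64

# Exponentiation is also not commutative:
assert 2**3 == 8 and 3**2 == 9
```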

Powers of a sum

The powers of a sum can normally be computed from the powers of the summands by the binomial formula

$$(a+b)^n = \sum_{i=0}^n \binom{n}{i} a^i b^{n-i} = \sum_{i=0}^n \frac{n!}{i!\,(n-i)!}\, a^i b^{n-i}.$$

However, this formula is true only if the summands commute (i.e., that $ab = ba$), which is implied if they belong to a structure that is commutative. Otherwise, if $a$ and $b$ are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation for exponentiation with non-commuting bases, which is then called non-commutative exponentiation.
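For commuting summands the binomial formula can be verified directly. A small Python sketch (the helper name `binomial_power` is ours):

```python
import math

def binomial_power(a, b, n):
    """Expand (a+b)^n with the binomial formula.
    Valid only when a and b commute (e.g. ordinary numbers)."""
    return sum(math.comb(n, i) * a**i * b**(n - i) for i in range(n + 1))

# For ordinary integers the expansion agrees with direct exponentiation:
assert binomial_power(2, 5, 6) == (2 + 5)**6
```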

Combinatorial interpretation

For nonnegative integers $n$ and $m$, the value of $n^m$ is the number of functions from a set of $m$ elements to a set of $n$ elements (see cardinal exponentiation). Such functions can be represented as $m$-tuples from an $n$-element set (or as $m$-letter words from an $n$-letter alphabet). Some examples for particular values of $n$ and $m$ are given in the following table:

$n^m$ | The possible $m$-tuples of elements from the set $\{1, \ldots, n\}$
$0^5 = 0$ | none
$1^4 = 1$ | (1, 1, 1, 1)
$2^3 = 8$ | (1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)
$3^2 = 9$ | (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)
$4^1 = 4$ | (1), (2), (3), (4)
$5^0 = 1$ | (), the empty tuple
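The counting interpretation can be reproduced with the standard library. A brief Python sketch:

```python
from itertools import product

# n**m counts the m-tuples over an n-element set, i.e. the functions
# from an m-element set to an n-element set.
n, m = 2, 3
tuples = list(product(range(1, n + 1), repeat=m))
assert len(tuples) == n**m == 8
assert (1, 2, 1) in tuples
```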

Particular bases

Powers of ten

See also: Scientific notation.

See main article: Power of 10. In the base ten (decimal) number system, integer powers of $10$ are written as the digit $1$ followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, $10^3 = 1000$ and $10^{-4} = 0.0001$.

Exponentiation with base $10$ is used in scientific notation to denote large or small numbers. For instance, $299792458$ (the speed of light in vacuum, in metres per second) can be written as $2.99792458 \times 10^8$ and then approximated as $2.998 \times 10^8$.

SI prefixes based on powers of $10$ are also used to describe small or large quantities. For example, the prefix kilo means $10^3 = 1000$, so a kilometre is $1000$ m.

Powers of two

See main article: Power of two. The first negative powers of $2$ are commonly used, and have special names, e.g.: half ($2^{-1}$) and quarter ($2^{-2}$).

Powers of $2$ appear in set theory, since a set with $n$ members has a power set, the set of all of its subsets, which has $2^n$ members.

Integer powers of $2$ are important in computer science. The positive integer powers $2^n$ give the number of possible values for an $n$-bit integer binary number; for example, a byte may take $2^8 = 256$ different values. The binary number system expresses any number as a sum of powers of $2$, and denotes it as a sequence of $1$ and $0$, separated by a binary point, where $1$ indicates a power of $2$ that appears in the sum; the exponent is determined by the place of this $1$: the nonnegative exponents are the rank of the $1$ on the left of the point (starting from $0$), and the negative exponents are determined by the rank on the right of the point.
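These computer-science facts are easy to exercise in Python, where a left shift computes powers of two:

```python
from itertools import combinations

# An n-bit binary integer can take 2**n values; a byte (8 bits) has 256.
assert 2**8 == 256

# Left shift computes powers of two: 1 << n == 2**n.
assert all(1 << n == 2**n for n in range(20))

# A set with n members has 2**n subsets (its power set).
s = {1, 2, 3}
subsets = [c for r in range(len(s) + 1) for c in combinations(s, r)]
assert len(subsets) == 2**len(s) == 8
```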

Powers of one

Every power of one equals one: $1^n = 1$. This is true even if $n$ is negative.

The first power of a number is the number itself: $n^1 = n$.

Powers of zero

If the exponent $n$ is positive ($n > 0$), the $n$th power of zero is zero: $0^n = 0$.

If the exponent $n$ is negative ($n < 0$), the $n$th power of zero $0^n$ is undefined, because it must equal $1/0^{-n}$ with $-n > 0$, and this would be $1/0$ according to above.

The expression $0^0$ is either defined as $1$, or it is left undefined.

Powers of negative one

If $n$ is an even integer, then $(-1)^n = 1$. This is because a negative number multiplied by another negative number cancels the sign, and thus gives a positive number.

If $n$ is an odd integer, then $(-1)^n = -1$. This is because there will be a remaining $-1$ after removing pairs of $-1$s.

Because of this, powers of $-1$ are useful for expressing alternating sequences. For a similar discussion of powers of the complex number $i$, see below.

Large exponents

The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound:

$$b^n \to +\infty \text{ as } n \to \infty \text{ when } b > 1.$$

This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one".

Powers of a number with absolute value less than one tend to zero:

$$b^n \to 0 \text{ as } n \to \infty \text{ when } |b| < 1.$$

Any power of one is always one:

$$b^n = 1 \text{ for all } n \text{ if } b = 1.$$

Powers of $-1$ alternate between $1$ and $-1$ as $n$ alternates between even and odd, and thus do not tend to any limit as $n$ grows.

If $b < -1$, $b^n$ alternates between larger and larger positive and negative numbers as $n$ alternates between even and odd, and thus does not tend to any limit as $n$ grows.

If the exponentiated number varies while tending to $1$ as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is

$$\left(1 + \frac{1}{n}\right)^n \to e \text{ as } n \to \infty.$$

See below.

Other limits, in particular those of expressions that take on an indeterminate form, are described below.

Power functions

See main article: Power law.

Real functions of the form $f(x) = cx^n$, where $c \ne 0$, are sometimes called power functions.[19] When $n$ is an integer and $n \ge 1$, two primary families exist: for $n$ even, and for $n$ odd. In general for $c > 0$, when $n$ is even $f(x) = cx^n$ will tend towards positive infinity with increasing $x$, and also towards positive infinity with decreasing $x$. All graphs from the family of even power functions have the general shape of $y = cx^2$, flattening more in the middle as $n$ increases.[20] Functions with this kind of symmetry ($f(-x) = f(x)$) are called even functions.

When $n$ is odd, $f(x)$'s asymptotic behavior reverses from positive $x$ to negative $x$. For $c > 0$, $f(x) = cx^n$ will also tend towards positive infinity with increasing $x$, but towards negative infinity with decreasing $x$. All graphs from the family of odd power functions have the general shape of $y = cx^3$, flattening more in the middle as $n$ increases and losing all flatness there in the straight line for $n = 1$. Functions with this kind of symmetry ($f(-x) = -f(x)$) are called odd functions.

For $c < 0$, the opposite asymptotic behavior is true in each case.[20]

Table of powers of decimal digits

n n2 n3 n4 n5 n6 n7 n8 n9 n10
1 1 1 1 1 1 1 1 1 1
2 4 8 16 32 64 128 256 512 1024
3 9 27 81 243 729 2187 6561 19683 59049
4 16 64 256 1024 4096 16384 65536 262144 1048576
5 25 125 625 3125 15625 78125 390625 1953125 9765625
6 36 216 1296 7776 46656 279936 1679616 10077696 60466176
7 49 343 2401 16807 117649 823543 5764801 40353607 282475249
8 64 512 4096 32768 262144 2097152 16777216 134217728 1073741824
9 81 729 6561 59049 531441 4782969 43046721 387420489 3486784401
10 100 1000 10000 100000 1000000 10000000 100000000 1000000000 10000000000

Rational exponents

If $x$ is a nonnegative real number, and $n$ is a positive integer, $x^{1/n}$ or $\sqrt[n]{x}$ denotes the unique positive real $n$th root of $x$, that is, the unique positive real number $y$ such that $y^n = x$.

If $x$ is a positive real number, and $\frac{p}{q}$ is a rational number, with $p$ and $q$ integers, then $x^{p/q}$ is defined as
$$x^{p/q} = \left(x^p\right)^{\frac{1}{q}} = \left(x^{\frac{1}{q}}\right)^p.$$
The equality on the right may be derived by setting $y = x^{\frac{1}{q}}$ and writing
$$\left(x^{\frac{1}{q}}\right)^p = y^p = \left(\left(y^p\right)^q\right)^{\frac{1}{q}} = \left(\left(y^q\right)^p\right)^{\frac{1}{q}} = \left(x^p\right)^{\frac{1}{q}}.$$

If $r$ is a positive rational number, $0^r = 0$, by definition.

All these definitions are required for extending the identity $(x^r)^s = x^{rs}$ to rational exponents.

On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a real $n$th root, which is negative, if $n$ is odd, and no real root if $n$ is even. In the latter case, whichever complex $n$th root one chooses for $x^{\frac{1}{n}}$, the identity $(x^a)^b = x^{ab}$ cannot be satisfied. For example,
$$\left((-1)^2\right)^{\frac{1}{2}} = 1^{\frac{1}{2}} = 1 \ne (-1)^{2 \cdot \frac{1}{2}} = (-1)^1 = -1.$$

See the following sections for details on the way these problems may be handled.
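The failure of $(x^a)^b = x^{ab}$ for a negative base can be observed directly in Python, whose `**` operator returns a real result for integral exponents and a principal complex root otherwise:

```python
# The identity (x^a)^b = x^(ab) fails for a negative base.
lhs = ((-1)**2)**0.5    # ((-1)^2)^(1/2) = 1^(1/2)
rhs = (-1)**(2 * 0.5)   # (-1)^1, an integral exponent
assert lhs == 1.0
assert rhs == -1.0
assert lhs != rhs
```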

Real exponents

For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (see Limits of rational exponents, below), or in terms of the logarithm of the base and the exponential function (see Powers via logarithms, below). The result is always a positive real number, and the identities and properties shown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly to complex exponents.

On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. One may choose one of these values, called the principal value, but there is no choice of the principal value for which the identity
$$\left(b^r\right)^s = b^{rs}$$
is true; see below. Therefore, exponentiation with a basis that is not a positive real number is generally viewed as a multivalued function.

Limits of rational exponents

Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number $b$ with an arbitrary real exponent $x$ can be defined by continuity with the rule[21]
$$b^x = \lim_{r \to x,\ r \in \mathbb{Q}} b^r \qquad (b \in \mathbb{R}^+,\ x \in \mathbb{R}),$$
where the limit is taken over rational values of $r$ only. This limit exists for every positive $b$ and every real $x$.

For example, if $x = \pi$ and $b > 1$, the non-terminating decimal representation $\pi = 3.14159\ldots$ and the monotonicity of the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must contain $b^\pi$:
$$\left[b^3, b^4\right], \left[b^{3.1}, b^{3.2}\right], \left[b^{3.14}, b^{3.15}\right], \left[b^{3.141}, b^{3.142}\right], \left[b^{3.1415}, b^{3.1416}\right], \left[b^{3.14159}, b^{3.14160}\right], \ldots$$

So, the upper bounds and the lower bounds of the intervals form two sequences that have the same limit, denoted $b^\pi$.

This defines $b^x$ for every positive $b$ and real $x$ as a continuous function of $b$ and $x$. See also Well-defined expression.[22]
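The squeezing by rational powers can be sketched numerically. A small Python check, taking $b = 2$ and successive decimal truncations of $\pi$ (all of which are rational exponents below $\pi$):

```python
import math

b = 2.0
# Successive decimal truncations of pi: rational exponents below pi.
truncations = [3, 3.1, 3.14, 3.141, 3.1415, 3.14159, 3.141592, 3.1415926]
values = [b**t for t in truncations]

# For b > 1 the rational powers increase towards b**pi ...
assert all(v <= b**math.pi for v in values)
# ... and the last truncation is already within 1e-5 of the limit.
assert abs(values[-1] - b**math.pi) < 1e-5
```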

Exponential function

See main article: Exponential function. The exponential function is often defined as $x \mapsto e^x$, where $e \approx 2.718$ is Euler's number. To avoid circular reasoning, this definition cannot be used here. So, a definition of the exponential function, denoted $\exp(x)$, and of Euler's number are given, which rely only on exponentiation with positive integer exponents. Then a proof is sketched that, if one uses the definition of exponentiation given in preceding sections, one has
$$\exp(x) = e^x.$$

There are many equivalent ways to define the exponential function, one of them being
$$\exp(x) = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n.$$

One has $\exp(0) = 1$, and the exponential identity
$$\exp(x+y) = \exp(x)\exp(y)$$
holds as well, since
$$\exp(x)\exp(y) = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n \left(1 + \frac{y}{n}\right)^n = \lim_{n \to \infty} \left(1 + \frac{x+y}{n} + \frac{xy}{n^2}\right)^n,$$
and the second-order term $\frac{xy}{n^2}$ does not affect the limit, yielding $\exp(x)\exp(y) = \exp(x+y)$.
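The limit definition and the exponential identity can be checked numerically. A Python sketch (the helper name `exp_limit` and the cutoff $n = 10^7$ are our choices for illustration):

```python
import math

def exp_limit(x, n=10**7):
    """Approximate exp(x) by the limit definition (1 + x/n)**n for large n."""
    return (1 + x / n)**n

# For large n the limit definition approaches Euler's number e = exp(1).
assert abs(exp_limit(1.0) - math.e) < 1e-6

# The exponential identity exp(x+y) = exp(x)exp(y), approximately:
assert abs(exp_limit(0.5) * exp_limit(0.25) - exp_limit(0.75)) < 1e-6
```

The finite-$n$ error shrinks roughly like $1/n$, which is why a fairly large $n$ is needed for even modest accuracy.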

Euler's number can be defined as $e = \exp(1)$. It follows from the preceding equations that $\exp(x) = e^x$ when $x$ is an integer (this results from the repeated-multiplication definition of the exponentiation). If $x$ is real, $\exp(x) = e^x$ results from the definitions given in preceding sections, by using the exponential identity if $x$ is rational, and the continuity of the exponential function otherwise.

The limit that defines the exponential function converges for every complex value of $x$, and therefore it can be used to extend the definition of $\exp(z)$, and thus $e^z$, from the real numbers to any complex argument $z$. This extended exponential function still satisfies the exponential identity, and is commonly used for defining exponentiation for complex base and exponent.

Powers via logarithms

The definition of $e^x$ as the exponential function allows defining $b^x$ for every positive real number $b$, in terms of the exponential and logarithm functions. Specifically, the fact that the natural logarithm $\ln(x)$ is the inverse of the exponential function means that one has
$$b = \exp(\ln b) = e^{\ln b}$$
for every $b > 0$. For preserving the identity
$$(e^x)^y = e^{xy},$$
one must have
$$b^x = \left(e^{\ln b}\right)^x = e^{x \ln b}.$$

So, $e^{x \ln b}$ can be used as an alternative definition of $b^x$ for any positive real $b$. This agrees with the definition given above using rational exponents and continuity, with the advantage to extend straightforwardly to any complex exponent.
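This alternative definition is easy to exercise with the standard library. A Python sketch (the helper name `power_via_log` is ours):

```python
import math

def power_via_log(b, x):
    """b**x defined as exp(x * ln b), valid for positive real b."""
    return math.exp(x * math.log(b))

# Agrees with the built-in power operator for positive bases:
assert math.isclose(power_via_log(2.0, 10), 1024.0)
assert math.isclose(power_via_log(9.0, 0.5), 3.0)
```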

Complex exponents with a positive real base

If $b$ is a positive real number, exponentiation with base $b$ and complex exponent $z$ is defined by means of the exponential function with complex argument (see the end of the preceding section) as
$$b^z = e^{z \ln b},$$
where $\ln b$ denotes the natural logarithm of $b$.

This satisfies the identity
$$b^{z+t} = b^z b^t.$$

In general, $\left(b^z\right)^t$ is not defined, since $b^z$ is not a real number. If a meaning is given to the exponentiation of a complex number (see below), one has, in general,
$$\left(b^z\right)^t \ne b^{zt},$$
unless $z$ is real or $t$ is an integer.

Euler's formula,
$$e^{iy} = \cos y + i \sin y,$$
allows expressing the polar form of $b^z$ in terms of the real and imaginary parts of $z$, namely
$$b^{x+iy} = b^x\left(\cos(y \ln b) + i\sin(y \ln b)\right),$$
where the absolute value of the trigonometric factor is one. This results from
$$b^{x+iy} = b^x b^{iy} = b^x e^{iy \ln b} = b^x\left(\cos(y \ln b) + i\sin(y \ln b)\right).$$
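The polar-form expression can be checked against the defining formula $b^z = e^{z \ln b}$. A Python sketch (the helper name `real_base_power` is ours):

```python
import cmath
import math

def real_base_power(b, z):
    """b**z for positive real b and complex z, via
    b^(x+iy) = b^x (cos(y ln b) + i sin(y ln b))."""
    x, y = z.real, z.imag
    mod = b**x                  # absolute value of the result
    ang = y * math.log(b)       # argument of the result
    return complex(mod * math.cos(ang), mod * math.sin(ang))

z = 2 + 3j
expected = cmath.exp(z * math.log(5.0))   # definition b^z = e^(z ln b)
assert cmath.isclose(real_base_power(5.0, z), expected)
```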

Non-integer powers of complex numbers

In the preceding sections, exponentiation with non-integer exponents has been defined for positive real bases only. For other bases, difficulties appear already with the apparently simple case of $n$th roots, that is, of exponents $1/n$, where $n$ is a positive integer. Although the general theory of exponentiation with non-integer exponents applies to $n$th roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand.

nth roots of a complex number

Every nonzero complex number $z$ may be written in polar form as
$$z = \rho e^{i\theta} = \rho(\cos\theta + i\sin\theta),$$
where $\rho$ is the absolute value of $z$, and $\theta$ is its argument. The argument is defined up to an integer multiple of $2\pi$; this means that, if $\theta$ is the argument of a complex number, then $\theta + 2k\pi$ is also an argument of the same complex number for every integer $k$.

The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. It follows that the polar form of an $n$th root of a complex number can be obtained by taking the $n$th root of the absolute value and dividing its argument by $n$:
$$\left(\rho e^{i\theta}\right)^{\frac{1}{n}} = \sqrt[n]{\rho}\, e^{\frac{i\theta}{n}}.$$

If $2\pi$ is added to $\theta$, the complex number is not changed, but this adds $2i\pi/n$ to the argument of the $n$th root, and provides a new $n$th root. This can be done $n$ times, and provides the $n$ $n$th roots of the complex number.

It is usual to choose one of the $n$th roots as the principal root. The common choice is to choose the $n$th root for which $-\pi < \theta \le \pi$, that is, the $n$th root that has the largest real part, and, if there are two, the one with positive imaginary part. This makes the principal $n$th root a continuous function in the whole complex plane, except for negative real values of the radicand. This function equals the usual $n$th root for positive real radicands. For negative real radicands, and odd exponents, the principal $n$th root is not real, although the usual $n$th root is real. Analytic continuation shows that the principal $n$th root is the unique complex differentiable function that extends the usual $n$th root to the complex plane without the nonpositive real numbers.

If the complex number is moved around zero by increasing its argument, after an increment of $2\pi$, the complex number comes back to its initial position, and its $n$th roots are permuted circularly (they are multiplied by $e^{2i\pi/n}$). This shows that it is not possible to define an $n$th root function that is continuous in the whole complex plane.
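The construction of all $n$ roots by stepping the argument can be sketched with the standard library. A Python example (the helper name `nth_roots` is ours, not a library function):

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex nth roots of z: take the nth root of |z| and divide
    the argument by n, then step the argument by 2*pi/n."""
    rho, theta = cmath.polar(z)
    r = rho**(1 / n)
    return [cmath.rect(r, (theta + 2 * math.pi * k) / n) for k in range(n)]

roots = nth_roots(-8 + 0j, 3)
assert len(roots) == 3
# One of the cube roots of -8 is the real root -2 ...
assert any(cmath.isclose(w, -2, abs_tol=1e-12) for w in roots)
# ... and each root cubed gives back -8.
assert all(cmath.isclose(w**3, -8, abs_tol=1e-9) for w in roots)
```

Note that for a negative real radicand and odd $n$, the $k = 0$ root produced here is the principal root, which is not the real one.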

Roots of unity

See main article: Root of unity.

The $n$th roots of unity are the $n$ complex numbers $z$ such that $z^n = 1$, where $n$ is a positive integer. They arise in various areas of mathematics, such as in the discrete Fourier transform or algebraic solutions of algebraic equations (Lagrange resolvent).

The $n$ $n$th roots of unity are the first $n$ powers of $\omega = e^{\frac{2\pi i}{n}}$, that is
$$1 = \omega^0 = \omega^n,\quad \omega = \omega^1,\quad \omega^2,\quad \ldots,\quad \omega^{n-1}.$$

The $n$th roots of unity that have this generating property are called primitive $n$th roots of unity; they have the form
$$\omega^k = e^{\frac{2k\pi i}{n}},$$

with $k$ coprime with $n$. The unique primitive square root of unity is $-1$; the primitive fourth roots of unity are $i$ and $-i$.

The $n$th roots of unity allow expressing all $n$th roots of a complex number $z$ as the $n$ products of a given $n$th root of $z$ with an $n$th root of unity.

Geometrically, the $n$th roots of unity lie on the unit circle of the complex plane at the vertices of a regular $n$-gon with one vertex on the real number 1.

As the number $e^{\frac{2\pi i}{n}}$ is the primitive $n$th root of unity with the smallest positive argument, it is called the principal primitive $n$th root of unity, sometimes shortened as principal $n$th root of unity, although this terminology can be confused with the principal value of $1^{1/n}$, which is 1.[23] [24]
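The generating property of the principal primitive root can be checked directly. A brief Python sketch (the helper name `roots_of_unity` is ours):

```python
import cmath
import math

def roots_of_unity(n):
    """The n nth roots of unity, as powers of the principal primitive root."""
    omega = cmath.exp(2j * math.pi / n)
    return [omega**k for k in range(n)]

roots = roots_of_unity(4)
# The fourth roots of unity are 1, i, -1, -i (up to rounding):
for r, expected in zip(roots, [1, 1j, -1, -1j]):
    assert cmath.isclose(r, expected, abs_tol=1e-12)
# Each satisfies z^4 = 1:
assert all(abs(r**4 - 1) < 1e-12 for r in roots)
```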

Complex exponentiation

Defining exponentiation with complex bases leads to difficulties that are similar to those described in the preceding section, except that there are, in general, infinitely many possible values for $z^w$. So, either a principal value is defined, which is not continuous for the values of $z$ that are real and nonpositive, or $z^w$ is defined as a multivalued function.

In all cases, the complex logarithm is used to define complex exponentiation as
$$z^w = e^{w \log z},$$
where $\log z$ is the variant of the complex logarithm that is used, that is, a function or a multivalued function such that
$$e^{\log z} = z$$
for every $z$ in its domain of definition.

Principal value

The principal value of the complex logarithm is the unique continuous function, commonly denoted $\log$, such that, for every nonzero complex number $z$,
$$e^{\log z} = z,$$
and the argument of $z$ satisfies
$$-\pi < \operatorname{Arg} z \le \pi.$$

The principal value of the complex logarithm is not defined for $z = 0$, it is discontinuous at negative real values of $z$, and it is holomorphic (that is, complex differentiable) elsewhere. If $z$ is real and positive, the principal value of the complex logarithm is the natural logarithm: $\log z = \ln z$.

The principal value of $z^w$ is defined as
$$z^w = e^{w \log z},$$
where $\log z$ is the principal value of the logarithm.

The function $(z, w) \to z^w$ is holomorphic except in the neighbourhood of the points where $z$ is real and nonpositive.

If $z$ is real and positive, the principal value of $z^w$ equals its usual value defined above. If $w = 1/n$, where $n$ is an integer, this principal value is the same as the one defined above.

Multivalued function

In some contexts, there is a problem with the discontinuity of the principal values of $\log z$ and $z^w$ at the negative real values of $z$. In this case, it is useful to consider these functions as multivalued functions.

If $\log z$ denotes one of the values of the multivalued logarithm (typically its principal value), the other values are $2ik\pi + \log z$, where $k$ is any integer. Similarly, if $z^w$ is one value of the exponentiation, then the other values are given by
$$e^{w(2ik\pi + \log z)} = z^w e^{2ik\pi w},$$
where $k$ is any integer.

Different values of $k$ give different values of $z^w$ unless $w$ is a rational number, that is, unless there is an integer $d$ such that $dw$ is an integer. This results from the periodicity of the exponential function, more specifically, that $e^a = e^b$ if and only if $a - b$ is an integer multiple of $2\pi i$.

If $w = \frac{m}{n}$ is a rational number with $m$ and $n$ coprime integers with $n > 0$, then $z^w$ has exactly $n$ values. In the case $m = 1$, these values are the same as those described in § nth roots of a complex number. If $w$ is an integer, there is only one value, which agrees with the integer power $z^w$.

The multivalued exponentiation is holomorphic for $z \ne 0$, in the sense that its graph consists of several sheets that each define a holomorphic function in the neighborhood of every point. If $z$ varies continuously along a circle around $0$, then, after a turn, the value of $z^w$ has changed sheet.

Computation

The canonical form $x + iy$ of $z^w$ can be computed from the canonical forms of $z$ and $w$. Although this can be described by a single formula, it is clearer to split the computation in several steps.

Polar form of $z$. If $z = a + ib$ is the canonical form of $z$ ($a$ and $b$ being real), then its polar form is
$$z = \rho e^{i\theta} = \rho(\cos\theta + i\sin\theta),$$
where $\rho = \sqrt{a^2 + b^2}$ and $\theta = \operatorname{atan2}(b, a)$ (see atan2 for the definition of this function).

Logarithm of $z$. The principal value of this logarithm is
$$\log z = \ln\rho + i\theta,$$
where $\ln$ denotes the natural logarithm. The other values of the logarithm are obtained by adding $2ik\pi$ for any integer $k$.

Computation of $w \log z$. If $w = c + di$ with $c$ and $d$ real, the values of $w \log z$ are
$$w\log z = (c\ln\rho - d\theta - 2dk\pi) + i\,(d\ln\rho + c\theta + 2ck\pi),$$
the principal value corresponding to $k = 0$.

Final result. Using the identities $e^{x+y} = e^x e^y$ and $e^{y\ln x} = x^y$, one gets
$$z^w = \rho^c e^{-d\theta - 2dk\pi}\left(\cos(d\ln\rho + c\theta + 2ck\pi) + i\sin(d\ln\rho + c\theta + 2ck\pi)\right),$$
with $k = 0$ for the principal value.
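The steps above can be sketched in Python with the standard library. The helper name `complex_power` is ours; the branch index `k` selects among the values of the multivalued power:

```python
import cmath
import math

def complex_power(z, w, k=0):
    """z**w via the steps above: polar form, a branch of the logarithm
    (argument offset by 2*k*pi), then the complex exponential."""
    rho, theta = cmath.polar(z)                       # z = rho * e^(i theta)
    log_z = math.log(rho) + 1j * (theta + 2 * math.pi * k)
    return cmath.exp(w * log_z)

z, w = 1 + 1j, 0.5 + 2j
# k = 0 gives the principal value, which matches Python's ** operator:
assert cmath.isclose(complex_power(z, w), z**w)
# Other k give the other values of the multivalued power:
assert not cmath.isclose(complex_power(z, w, k=1), z**w)
```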
Examples

$i^i$

The polar form of $i$ is $i = e^{i\pi/2}$, and the values of $\log i$ are thus
$$\log i = i\left(\frac{\pi}{2} + 2k\pi\right).$$
It follows that
$$i^i = e^{i \log i} = e^{-\frac{\pi}{2}}\, e^{-2k\pi}.$$
So, all values of $i^i$ are real, the principal one being $e^{-\frac{\pi}{2}} \approx 0.2079$.

$(-2)^{3+4i}$

Similarly, the polar form of $-2$ is $-2 = 2e^{i\pi}$. So, the above described method gives the values
$$\begin{aligned}(-2)^{3+4i} &= 2^3 e^{-4(\pi+2k\pi)}\left(\cos(4\ln 2 + 3(\pi+2k\pi)) + i\sin(4\ln 2 + 3(\pi+2k\pi))\right)\\ &= -2^3 e^{-4(\pi+2k\pi)}\left(\cos(4\ln 2) + i\sin(4\ln 2)\right).\end{aligned}$$
In this case, all the values have the same argument $4\ln 2$, and different absolute values.

In both examples, all values of $z^w$ have the same argument. More generally, this is true if and only if the real part of $w$ is an integer.

Failure of power and logarithm identities

Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions.

Irrationality and transcendence

See main article: Gelfond–Schneider theorem. If $b$ is a positive real algebraic number, and $x$ is a rational number, then $b^x$ is an algebraic number. This results from the theory of algebraic extensions. This remains true if $b$ is any algebraic number, in which case, all values of $b^x$ (as a multivalued function) are algebraic. If $x$ is irrational (that is, not rational), and both $b$ and $x$ are algebraic, the Gelfond–Schneider theorem asserts that all values of $b^x$ are transcendental (that is, not algebraic), except if $b$ equals $0$ or $1$.

In other words, if $x$ is irrational and $b \not\in \{0, 1\}$, then at least one of $b$, $x$, and $b^x$ is transcendental.

Integer powers in algebra

The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication.[25] The definition of $x^0$ requires further the existence of a multiplicative identity.[26]

An algebraic structure consisting of a set together with an associative operation denoted multiplicatively, and a multiplicative identity denoted by $1$ is a monoid. In such a monoid, exponentiation of an element $x$ is defined inductively by
$$x^0 = 1, \qquad x^{n+1} = x\, x^n$$
for every nonnegative integer $n$.

If $n$ is a negative integer, $x^n$ is defined only if $x$ has a multiplicative inverse.[27] In this case, the inverse of $x$ is denoted $x^{-1}$, and $x^n$ is defined as $\left(x^{-1}\right)^{-n}$.

Exponentiation with integer exponents obeys the following laws, for x and y in the algebraic structure, and m and n integers:

x^0 = 1
x^{m+n} = x^m x^n
(x^m)^n = x^{mn}
(xy)^n = x^n y^n   if xy = yx, and, in particular, if the multiplication is commutative.
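The caveat on the last law matters: when x and y do not commute, (xy)^n need not equal x^n y^n. A minimal Python check with 2×2 integer matrices, represented as nested tuples (the helper names are ours):

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices given as tuples of row tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def mat_pow(m, n):
    """m^n by repeated multiplication; m^0 is the identity matrix."""
    r = ((1, 0), (0, 1))
    for _ in range(n):
        r = mat_mul(r, m)
    return r

x = ((1, 1), (0, 1))
y = ((1, 0), (1, 1))
# x and y do not commute, so (xy)^2 differs from x^2 y^2
lhs = mat_pow(mat_mul(x, y), 2)
rhs = mat_mul(mat_pow(x, 2), mat_pow(y, 2))
print(lhs == rhs)  # False
```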

These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. This includes, as specific instances, geometric transformations, and endomorphisms of any mathematical structure.

When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript, before the exponent. For example, if f is a real function whose values can be multiplied, f^n denotes the exponentiation with respect to multiplication, and f^{∘n} may denote exponentiation with respect to function composition. That is,

(f^n)(x) = (f(x))^n = f(x) f(x) ⋯ f(x),

and

(f^{∘n})(x) = f(f(⋯ f(f(x)) ⋯)).

Commonly, (f^n)(x) is denoted f(x)^n, while (f^{∘n})(x) is denoted f^n(x).
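The distinction can be made concrete in Python, where iteration and pointwise power are different higher-order functions (the helper names `iterate` and `pointwise_power` are ours):

```python
def iterate(f, n):
    """The n-th iterate f(f(...f(x)...)); the 0-th iterate is the identity."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

def pointwise_power(f, n):
    """The pointwise n-th power: x -> f(x)**n."""
    return lambda x: f(x) ** n

f = lambda x: x + 3
print(iterate(f, 3)(1))          # f(f(f(1))) = 10
print(pointwise_power(f, 3)(1))  # f(1)**3 = 64
```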

In a group

A multiplicative group is a set with an associative operation denoted as multiplication, that has an identity element, and such that every element has an inverse.

So, if G is a group, x^n is defined for every x ∈ G and every integer n.

The set of all powers of an element x of a group is a subgroup, called the cyclic group generated by x. If all the powers of x are distinct, this cyclic group is isomorphic to the additive group ℤ of the integers. Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of x. If the order of x is n, then x^n = x^0 = 1, and the cyclic group generated by x consists of the n first powers of x (starting indifferently from the exponent 0 or 1).

Orders of elements play a fundamental role in group theory. For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the order of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems), and in the classification of finite simple groups.
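For instance, in the multiplicative group of integers modulo n the order of an element can be found by direct computation; the following Python sketch (helper name ours) also checks the divisibility claim on a small case:

```python
import math

def element_order(x, n):
    """Order of x in the multiplicative group (Z/nZ)*; x must be invertible mod n."""
    if math.gcd(x, n) != 1:
        raise ValueError("x is not invertible modulo n")
    k, power = 1, x % n
    while power != 1:
        power = (power * x) % n
        k += 1
    return k

# (Z/7Z)* has 6 elements; every element order divides 6
print(element_order(2, 7))  # 3
print(element_order(3, 7))  # 6
```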

Superscript notation is also used for conjugation; that is, g^h = h^{-1}gh, where g and h are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely

(g^h)^k = g^{hk}

and

(gh)^k = g^k h^k.

In a ring

In a ring, it may occur that some nonzero elements satisfy x^n = 0 for some integer n. Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring.

If the nilradical is reduced to the zero ideal (that is, if x ≠ 0 implies x^n ≠ 0 for every positive integer n), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring.
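A small Python illustration in the rings ℤ/nℤ (the helper name `is_nilpotent` is ours): ℤ/8ℤ is not reduced, since 2^3 = 8 ≡ 0, while ℤ/6ℤ is reduced.

```python
def is_nilpotent(x, n):
    """Whether x is nilpotent in Z/nZ, i.e. x^k ≡ 0 (mod n) for some k >= 1."""
    power = x % n
    for _ in range(n):  # checking up to exponent n suffices in Z/nZ
        if power == 0:
            return True
        power = (power * x) % n
    return power == 0

print(is_nilpotent(2, 8))  # True: 2^3 ≡ 0 (mod 8)
print(is_nilpotent(3, 6))  # False: Z/6Z is reduced
```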

In a polynomial ring k[x_1, …, x_n] over a field k, an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz).

Matrices and linear operators

If A is a square matrix, then the product of A with itself n times is called the matrix power. Also A^0 is defined to be the identity matrix,[28] and if A is invertible, then A^{-n} = (A^{-1})^n.

Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system. This is the standard interpretation of a Markov chain, for example. Then A^2 x is the state of the system after two time steps, and so forth: A^n x is the state of the system after n time steps. The matrix power A^n is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors.
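As a sketch, the following Python code (plain nested tuples, no libraries; the two-state transition matrix is a made-up example) evolves a toy Markov chain by binary exponentiation; the rows of A^n converge to the stationary distribution:

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices given as tuples of row tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def mat_pow(a, n):
    """A^n by binary exponentiation; A^0 is the identity matrix."""
    result = ((1.0, 0.0), (0.0, 1.0))
    while n > 0:
        if n & 1:
            result = mat_mul(result, a)
        a = mat_mul(a, a)
        n >>= 1
    return result

# hypothetical two-state chain (each row is a probability distribution)
P = ((0.9, 0.1), (0.5, 0.5))
P50 = mat_pow(P, 50)  # transitions 50 steps ahead; both rows approach (5/6, 1/6)
```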

Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, d/dx, which is a linear operator acting on functions f(x) to give a new function (d/dx)f(x) = f′(x). The nth power of the differentiation operator is the nth derivative:

\left(\frac{d}{dx}\right)^n f(x) = \frac{d^n}{dx^n} f(x) = f^{(n)}(x).

These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups.[29] Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.

Finite fields

See main article: Finite field. A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of 0. Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite.

A finite field is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form q = p^k, where p is a prime number, and k is a positive integer. For every such q, there are fields with q elements. The fields with q elements are all isomorphic, which allows, in general, working as if there were only one field with q elements, denoted F_q.

One has x^q = x for every x ∈ F_q.

A primitive element in F_q is an element g such that the set of the q − 1 first powers of g (that is, {g^1 = g, g^2, …, g^{q−1} = g^0 = 1}) equals the set of the nonzero elements of F_q. There are φ(q − 1) primitive elements in F_q, where φ is Euler's totient function.
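For a prime q = p this can be checked by brute force in Python (helper names ours); in F_7, for example, the primitive elements are 3 and 5, and indeed φ(6) = 2:

```python
import math

def multiplicative_order(g, p):
    """Order of g in the multiplicative group of the prime field F_p."""
    k, power = 1, g % p
    while power != 1:
        power = (power * g) % p
        k += 1
    return k

def primitive_elements(p):
    """Elements whose successive powers run through all nonzero elements of F_p."""
    return [g for g in range(1, p) if multiplicative_order(g, p) == p - 1]

def totient(n):
    """Euler's totient function, by the naive gcd count."""
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

print(primitive_elements(7))  # [3, 5]
```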

In F_q, the freshman's dream identity (x + y)^p = x^p + y^p is true for the exponent p. As x^p = x in the prime field F_p, it follows that the map

F : F_q → F_q, x ↦ x^p

is linear over F_p, and is a field automorphism, called the Frobenius automorphism. If q = p^k, the field F_q has k automorphisms, which are the k first powers (under composition) of F. In other words, the Galois group of F_q is cyclic of order k, generated by the Frobenius automorphism.
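Both facts used above are easy to verify numerically; a Python check, exhaustive over the prime field F_7 using the built-in three-argument `pow` for modular exponentiation:

```python
p = 7  # a small prime; pow(x, p, p) computes x^p modulo p efficiently

# freshman's dream: (x + y)^p ≡ x^p + y^p (mod p)
freshman = all(
    pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
    for x in range(p) for y in range(p)
)

# Fermat's little theorem: x^p ≡ x (mod p), so Frobenius fixes F_p pointwise
fermat = all(pow(x, p, p) == x % p for x in range(p))

print(freshman, fermat)  # True True
```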

The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if g is a primitive element in F_q, then g^e can be efficiently computed with exponentiation by squaring for any e, even if q is large, while there is no known computationally practical algorithm that allows retrieving e from g^e if q is sufficiently large.
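A toy Python sketch of the exchange follows. The prime 2^32 − 5 and the base 2 are illustration-only assumptions, far too small for real security (deployments use primes of thousands of bits, or elliptic curves); the point is only that both parties compute the same value g^{ab}:

```python
import secrets

p = 2**32 - 5  # a prime, chosen here only for illustration
g = 2          # base for the sketch; a real system uses a vetted generator

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)  # Alice publishes g^a
B = pow(g, b, p)  # Bob publishes g^b

# each side exponentiates the other's public value with its own secret
shared_alice = pow(B, a, p)  # (g^b)^a = g^(ab)
shared_bob = pow(A, b, p)    # (g^a)^b = g^(ab)
```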

Powers of sets

The Cartesian product of two sets S and T is the set of the ordered pairs (x, y) such that x ∈ S and y ∈ T.

This operation is not properly commutative nor associative, but has these properties up to canonical isomorphisms, that allow identifying, for example, (x, (y, z)), ((x, y), z), and (x, y, z).

This allows defining the nth power S^n of a set S as the set of all n-tuples (x_1, …, x_n) of elements of S.

When S is endowed with some structure, it is frequent that S^n is naturally endowed with a similar structure. In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes product structure. For example ℝ^n (where ℝ denotes the real numbers) denotes the Cartesian product of n copies of ℝ, as well as their direct product as vector spaces, topological spaces, rings, etc.

Sets as exponents

An n-tuple (x_1, …, x_n) of elements of S can be considered as a function from {1, …, n}.

This generalizes to the following notation.

Given two sets S and T, the set of all functions from T to S is denoted S^T. This exponential notation is justified by the following canonical isomorphisms (for the first one, see Currying):

(S^T)^U ≅ S^{T×U},

S^{T⊔U} ≅ S^T × S^U,

where × denotes the Cartesian product, and ⊔ the disjoint union.

One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. For distinguishing direct sums from direct products, the exponent of a direct sum is placed between parentheses. For example, ℝ^ℕ denotes the vector space of the infinite sequences of real numbers, and ℝ^(ℕ) the vector space of those sequences that have a finite number of nonzero elements. The latter has a basis consisting of the sequences with exactly one nonzero element that equals 1, while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma).

In this context, 2 can represent the set {0, 1}. So, 2^S denotes the power set of S, that is the set of the functions from S to {0, 1}, which can be identified with the set of the subsets of S, by mapping each function to the inverse image of 1.

This fits in with the exponentiation of cardinal numbers, in the sense that |2^S| = 2^{|S|}, where |S| is the cardinality of S.
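The identification of subsets with indicator functions can be written out in Python (the helper name `power_set` is ours):

```python
from itertools import product

def power_set(s):
    """Subsets of s, enumerated as functions s -> {0, 1} (indicator functions)."""
    elems = sorted(s)
    subsets = []
    for indicator in product((0, 1), repeat=len(elems)):
        # each subset is the inverse image of 1 under its indicator function
        subsets.append({e for e, bit in zip(elems, indicator) if bit == 1})
    return subsets

print(len(power_set({1, 2, 3})))  # 8 = 2**3, matching |2^S| = 2^|S|
```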

In category theory

See main article: Cartesian closed category. In the category of sets, the morphisms between sets X and Y are the functions from X to Y. It results that the set of the functions from X to Y that is denoted Y^X in the preceding section can also be denoted hom(X, Y).

The isomorphism (S^T)^U ≅ S^{T×U} can be rewritten

hom(U, S^T) ≅ hom(T×U, S).

This means the functor "exponentiation to the power T" is a right adjoint to the functor "direct product with T".

This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor X ↦ X^T is, if it exists, a right adjoint to the functor Y ↦ T × Y. A category is called a Cartesian closed category, if direct products exist, and the functor Y ↦ X × Y has a right adjoint for every T.

Repeated exponentiation

See main article: Tetration and Hyperoperation. Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at 3, the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7,625,597,484,987 (= 3^27) respectively.
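The whole hyperoperation ladder can be expressed as one recursive Python function (a naive sketch for small natural-number arguments; the name `hyper` and its rank convention are ours):

```python
def hyper(k, a, b):
    """Hyperoperation of rank k: k=1 addition, k=2 multiplication,
    k=3 exponentiation, k=4 tetration, and so on."""
    if k == 1:
        return a + b
    if b == 0:
        return 1 if k >= 3 else 0  # neutral starting value of the repeated operation
    return hyper(k - 1, a, hyper(k, a, b - 1))

# evaluated at 3: addition, multiplication, exponentiation give 6, 9, 27
print(hyper(1, 3, 3), hyper(2, 3, 3), hyper(3, 3, 3))
```

Beware that the naive recursion is hopeless beyond tiny inputs: already hyper(4, 3, 3) = 3^3^3 = 3^27 would unwind the additions one at a time.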

Limits of powers

Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^0. The limits in these examples exist, but have different values, showing that the two-variable function x^y has no limit at the point (0, 0). One may consider at what points this function does have a limit.

More precisely, consider the function f(x, y) = x^y defined on D = {(x, y) ∈ ℝ^2 : x > 0}. Then D can be viewed as a subset of the set of all pairs (x, y) with x, y belonging to the extended real number line, endowed with the product topology, which will contain the points at which the function f has a limit.

In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞), and (1, −∞).[30] Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^{+∞}, and 1^{−∞}, which remain indeterminate forms.

Under this definition by continuity, we obtain, for example, x^{+∞} = +∞ and x^{−∞} = 0 when 1 < x ≤ +∞, and x^{+∞} = 0 and x^{−∞} = +∞ when 0 ≤ x < 1.

These powers are obtained by taking limits of x^y for positive values of x. This method does not permit a definition of x^y when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D.

On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n tends to +∞ as x tends to 0 through positive values, but not negative ones.

Efficient computation with integer exponents

Computing b^n using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, apply Horner's rule to the exponent 100 written in binary:

100 = 2^2 + 2^5 + 2^6 = 2^2(1 + 2^3(1 + 2)).

Then compute the following terms in order, reading Horner's rule from right to left.

2^2 = 4
2 (2^2) = 2^3 = 8
(2^3)^2 = 2^6 = 64
(2^6)^2 = 2^12 = 4096
(2^12)^2 = 2^24 = 16777216
2 (2^24) = 2^25 = 33554432
(2^25)^2 = 2^50 = 1125899906842624
(2^50)^2 = 2^100 = 1267650600228229401496703205376

This series of steps only requires 8 multiplications instead of 99.

In general, the number of multiplication operations required to compute b^n can be reduced to ♯n + ⌊log₂ n⌋ − 1, by using exponentiation by squaring, where ♯n denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by computing and using the minimal addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available.[31] However, in practical computations, exponentiation by squaring is efficient enough, and much easier to implement.
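Exponentiation by squaring reads the exponent's binary digits directly; a minimal Python sketch:

```python
def power(b, n):
    """b^n for a nonnegative integer n by binary exponentiation.

    One squaring per bit of n, plus one extra multiplication per 1-bit,
    instead of the n - 1 multiplications of naive repeated multiplication.
    """
    result = 1
    while n > 0:
        if n & 1:       # the current lowest bit of the exponent is 1
            result *= b
        b *= b          # square the base and move to the next bit
        n >>= 1
    return result

print(power(2, 100))  # 1267650600228229401496703205376
```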

Iterated functions

See also: Iterated function. Function composition is a binary operation that is defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. It is denoted g ∘ f, and defined as (g ∘ f)(x) = g(f(x)) for every x in the domain of f.

If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the nth iterate of the function. Thus f^n denotes generally the nth iterate of f; for example, f^3(x) means f(f(f(x))).

When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration before the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication after the parentheses. Thus f^2(x) = f(f(x)), and f(x)^2 = f(x)·f(x). When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example f^{∘3} = f ∘ f ∘ f, and f^3 = f·f·f.

For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions. So, sin^2 x and sin^2(x) both mean sin(x)·sin(x) and not sin(sin(x)), which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors.

In this context, the exponent −1 always denotes the inverse function, if it exists. So sin^{−1} x = sin^{−1}(x) = arcsin x. For the multiplicative inverse, fractions are generally used, as in 1/sin(x).

In programming languages

Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (^). The original version of ASCII included an up-arrow symbol (↑), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages.[32] The notations include:

In most programming languages with an infix exponentiation operator, it is right-associative, that is, a^b^c is interpreted as a^(b^c).[35] This is because (a^b)^c is equal to a^(b*c) and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language.
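Python's `**` follows the right-associative convention, which makes the difference easy to see:

```python
# right-associative reading: 2 ** 3 ** 2 groups as 2 ** (3 ** 2) = 2 ** 9
right = 2 ** 3 ** 2
# the left-associative reading would give (2 ** 3) ** 2 = 8 ** 2,
# which equals 2 ** (3 * 2) and is therefore redundant notation
left = (2 ** 3) ** 2

print(right, left)  # 512 64
```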

Other programming languages use functional notation:

Still others only provide exponentiation as part of standard libraries:

In some statically typed languages that prioritize type safety such as Rust, exponentiation is performed via a multitude of methods:

See also

Notes and References

  1. Web site: Nykamp . Duane . Basic rules for exponentiation . Math Insight . August 27, 2020.
  2. There are three common notations for multiplication:

    x x y

    is most commonly used for explicit numbers and at a very elementary level;

    xy

    is most common when variables are used;

    xy

    is used for emphasizing that one talks of multiplication or when omitting the multiplication sign would be confusing.
  3. Web site: Exponent | Etymology of exponent by etymonline .
  4. Book: Rotman, Joseph J. . Joseph J. Rotman . 2015 . Advanced Modern Algebra, Part 1 . Providence, RI . . p. 130, fn. 4 . 978-1-4704-1554-9 . 3rd . . 165 . subscription.
  5. Book: Szabó, Árpád . 1978 . The Beginnings of Greek Mathematics . Dordrecht . . 37 . 90-277-0819-3 . Synthese Historical Library . 17 . A.M. Ungar .
  6. Book: Ball, W. W. Rouse . W. W. Rouse Ball . 1915 . A Short Account of the History of Mathematics . London . . 38 . 6th .
  7. Archimedes. (2009). THE SAND-RECKONER. In T. Heath (Ed.), The Works of Archimedes: Edited in Modern Notation with Introductory Chapters (Cambridge Library Collection - Mathematics, pp. 229-232). Cambridge: Cambridge University Press. .
  8. Book: Cajori, Florian . A History of Mathematical Notations . 1928 . The Open Court Company . 1 . 102 .
  9. Book: Cajori, Florian . Florian Cajori . 1928 . A History of Mathematical Notations . London . . 344 . 1 .
  10. Web site: Earliest Known Uses of Some of the Words of Mathematics (E). June 23, 2017.
  11. Book: Stifel, Michael . Michael Stifel . 1544 . Arithmetica integra . Nuremberg . . 235v .
  12. Web site: Zenzizenzizenzic . World Wide Words . Quinion . Michael . Michael Quinion . 2020-04-16.
  13. Book: Cajori, Florian . A History of Mathematical Notations . 1928 . The Open Court Company . 1 . 204 .
  14. Book: Descartes, René . René Descartes . 1637 . Discours de la méthode [...] . Leiden . Jan Maire . 299 . La Géométrie . Et aa, ou, pour multiplier par soy mesme; Et, pour le multiplier encore une fois par, & ainsi a l'infini. (And aa, or, in order to multiply by itself; and, in order to multiply it once more by, and thus to infinity).
  15. The most recent usage in this sense cited by the OED is from 1806 .
  16. Book: Abstract Algebra: an inquiry based approach . Hodge . Jonathan K. . Schlicker . Steven . Sundstorm . Ted . 94 . 2014 . CRC Press . 978-1-4665-6706-1 .
  17. Book: Technical Shop Mathematics . Achatz . Thomas . 101 . 2005 . 3rd . Industrial Press . 978-0-8311-3086-2 .
  18. Book: Knobloch, Eberhard . Eberhard Knobloch . Kostas Gavroglu . Jean Christianidis . Efthymios Nicolaidis . The infinite in Leibniz’s mathematics – The historiographical method of comprehension in context . 10.1007/978-94-017-3596-4_20 . 9789401735964 . Springer Netherlands . Boston Studies in the Philosophy of Science . Trends in the Historiography of Science . 1994 . 276 . A positive power of zero is infinitely small, a negative power of zero is infinite..
  19. Book: Hass . Joel R. . Heil . Christopher E. . Weir . Maurice D. . Thomas . George B. . Thomas' Calculus . 2018 . Pearson . 9780134439020 . 7–8 . 14.
  20. Book: Anton . Howard . Bivens . Irl . Davis . Stephen . Calculus: Early Transcendentals . 2012 . John Wiley & Sons . 28 . 9780470647691 . 9th . limited.
  21. Book: Denlinger, Charles G. . Elements of Real Analysis . Jones and Bartlett . 2011 . 278–283 . 978-0-7637-7947-4.
  22. Book: Limits of sequences . . Analysis I . Texts and Readings in Mathematics . 2016 . Tao . Terence . 37 . 126–154 . 978-981-10-1789-6 . 10.1007/978-981-10-1789-6_6.
  23. Book: Introduction to Algorithms . second . Cormen . Thomas H. . Leiserson . Charles E. . Rivest . Ronald L. . Stein . Clifford . . 2001 . 978-0-262-03293-3. Online resource .
  24. Book: Difference Equations: From Rabbits to Chaos . Difference Equations: From Rabbits to Chaos . . Cull . Paul . Flahive . Mary . Mary Flahive . Robson . Robby . 2005 . Springer . 978-0-387-23234-8. Defined on p. 351.
  25. More generally, power associativity is sufficient for the definition.
  26. Book: Bourbaki . Nicolas . Algèbre . 1970 . Springer. I.2.
  27. Book: Bloom . David M. . Linear Algebra and Geometry . 1979 . 978-0-521-29324-2 . 45 . Cambridge University Press . registration.
  28. Chapter 1, Elementary Linear Algebra, 8E, Howard Anton.
  29. E. Hille, R. S. Phillips: Functional Analysis and Semi-Groups. American Mathematical Society, 1975.
  30. Nicolas Bourbaki, Topologie générale, V.4.2.
  31. Gordon . D. M. . A Survey of Fast Exponentiation Methods . Journal of Algorithms . 27 . 129–146 . 1998 . 10.1.1.17.7076 . 10.1006/jagm.1997.0913 . 2024-01-11 . 2018-07-23 . https://web.archive.org/web/20180723164121/http://www.ccrwest.org/gordon/jalg.pdf . dead .
  32. Book: Richard Gillam. Unicode Demystified: A Practical Programmer's Guide to the Encoding Standard. 2003. 0201700522. 33.
  33. News: BASCOM - A BASIC compiler for TRS-80 I and II . Daneliuk . Timothy "Tim" A. . 1982-08-09 . . Software Reviews . . 4 . 31 . 41–42 . 2020-02-06 . live . https://web.archive.org/web/20200207104336/https://books.google.de/books?id=NDAEAAAAMBAJ&pg=PA42&focus=viewport#v=onepage&q=TRS-80%20exponention . 2020-02-07.
  34. 80 Contents . . . 0744-7868 . October 1983 . 45 . 5 . 2020-02-06.
  35. Book: Robert W. Sebesta. Concepts of Programming Languages. 2010. 0136073476. 130, 324.