Partial fraction decomposition

In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator.[1]

The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives,[2] Taylor series expansions, inverse Z-transforms, and inverse Laplace transforms. The concept was discovered independently in 1702 by both Johann Bernoulli and Gottfried Leibniz.[3]

In symbols, the partial fraction decomposition of a rational fraction of the form \frac{f(x)}{g(x)}, where f and g are polynomials, is its expression as

\frac{f(x)}{g(x)}=p(x) + \sum_j \frac{f_j(x)}{g_j(x)}

where p(x) is a polynomial, and, for each j, the denominator g_j(x) is a power of an irreducible polynomial (that is, a polynomial that is not factorable into polynomials of positive degrees), and the numerator f_j(x) is a polynomial of smaller degree than the degree of this irreducible polynomial.

When explicit computation is involved, a coarser decomposition is often preferred, which consists of replacing "irreducible polynomial" by "square-free polynomial" in the description of the outcome. This allows replacing polynomial factorization by the much easier-to-compute square-free factorization. This is sufficient for most applications, and avoids introducing irrational coefficients when the coefficients of the input polynomials are integers or rational numbers.
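As an illustration of this point, the following sketch, assuming the SymPy library is available, decomposes a fraction whose denominator contains the factor x^2 - 2, which is irreducible over the rationals; the default decomposition keeps rational coefficients, while a full decomposition over the roots introduces the square root of 2. The function name apart is SymPy's, not anything prescribed by the text above.

# A minimal sketch (assuming SymPy) contrasting decomposition over the
# rationals with a full decomposition over the roots of the denominator.
from sympy import symbols, apart

x = symbols('x')
f = 1 / ((x - 1) * (x**2 - 2))     # x**2 - 2 is irreducible over the rationals

# Default: denominators stay square-free over Q, coefficients stay rational.
print(apart(f, x))                 # expected (up to ordering): (x + 1)/(x**2 - 2) - 1/(x - 1)

# Full decomposition splits x**2 - 2 as well, which brings sqrt(2) into the
# coefficients; doit() expands SymPy's RootSum form into explicit terms.
print(apart(f, x, full=True).doit())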

Basic principles

Let R(x) = \frac{F}{G} be a rational fraction, where F and G are univariate polynomials in the indeterminate x over a field. The existence of the partial fraction decomposition can be proved by applying inductively the following reduction steps.

Polynomial part

There exist two polynomials E and F_1 such that

\frac{F}{G}=E+\frac{F_1}{G} \quad\text{and}\quad \deg F_1 <\deg G,

where \deg P denotes the degree of the polynomial P.

This results immediately from the Euclidean division of F by G, which asserts the existence of E and F_1 such that

F=EG+F_1

and

\deg F_1<\deg G.

This allows supposing in the next steps that

\deg F<\deg G.
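This step is ordinary polynomial division. As a small sketch, assuming the SymPy library, the polynomial part E and the remainder F_1 can be extracted as follows; the particular F and G are chosen only for illustration.

# Polynomial part via Euclidean division (sketch, assuming SymPy).
from sympy import symbols, div, degree

x = symbols('x')
F = x**5 + 2*x**3 - x + 7      # arbitrary example numerator
G = x**2 + 1                   # arbitrary example denominator

E, F1 = div(F, G, x)           # F == E*G + F1 with deg F1 < deg G
assert (E*G + F1 - F).expand() == 0
assert degree(F1, x) < degree(G, x)
print(E, F1)                   # E == x**3 + x, F1 == -2*x + 7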

Factors of the denominator

If

\deg F<\deg G,

and G = G_1 G_2, where G_1 and G_2 are coprime polynomials, then there exist polynomials F_1 and F_2 such that

\frac{F}{G}=\frac{F_1}{G_1}+\frac{F_2}{G_2},

and

\deg F_1 < \deg G_1\quad\text{and}\quad\deg F_2 < \deg G_2.

This can be proved as follows. Bézout's identity asserts the existence of polynomials C and D such that CG_1 + DG_2 = 1 (by hypothesis, 1 is a greatest common divisor of G_1 and G_2).

Let

DF=G_1Q+F_1 \quad\text{with}\quad \deg F_1<\deg G_1

be the Euclidean division of DF by G_1. Setting

F_2=CF+QG_2,

one gets

\begin{align}\frac{F}{G}&=\frac{F(CG_1+DG_2)}{G_1G_2}=\frac{DF}{G_1}+\frac{CF}{G_2}\\&=\frac{F_1+QG_1}{G_1}+\frac{CF}{G_2}\\&=\frac{F_1}{G_1}+\frac{CF+QG_2}{G_2}=\frac{F_1}{G_1}+\frac{F_2}{G_2}.\end{align}

It remains to show that

\deg F_2<\deg G_2.

By reducing the last sum of fractions to a common denominator, one gets

F=F_2G_1+F_1G_2,

and thus

\begin{align}\deg F_2 &=\deg(F-F_1G_2)-\deg G_1 \le \max(\deg F,\deg (F_1G_2))-\deg G_1\\&< \max(\deg G,\deg(G_1G_2))-\deg G_1= \deg G_2.\end{align}
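The proof is constructive, and the construction can be replayed step by step in a computer algebra system. The sketch below, assuming SymPy's gcdex and div, splits F/(G_1 G_2) for a small example with coprime G_1 and G_2; the particular polynomials are only illustrative.

# Splitting over coprime factors via Bezout's identity (sketch, assuming SymPy).
from sympy import symbols, gcdex, div, degree, cancel

x = symbols('x')
F = x + 3                        # arbitrary example, deg F < deg(G1*G2)
G1, G2 = x**2 + 1, x - 2         # coprime factors of the denominator

C, D, g = gcdex(G1, G2)          # C*G1 + D*G2 == g, and g == 1 since G1, G2 are coprime
assert g == 1

Q, F1 = div(D*F, G1, x)          # D*F == Q*G1 + F1 with deg F1 < deg G1
F2 = (C*F + Q*G2).expand()       # as in the proof

assert degree(F1, x) < degree(G1, x)
# F/(G1*G2) == F1/G1 + F2/G2
assert cancel(F/(G1*G2) - F1/G1 - F2/G2) == 0
print(F1, F2)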

Powers in the denominator

Using the preceding decomposition inductively one gets fractions of the form

\frac{F}{G^k},

with

\deg F<\deg G^k=k\deg G,

where G is an irreducible polynomial. If k>1, one can decompose further, by using that an irreducible polynomial is a square-free polynomial, that is, 1 is a greatest common divisor of the polynomial and its derivative. If G' is the derivative of G, Bézout's identity provides polynomials C and D such that

CG+DG'=1

and thus

F=FCG+FDG'.

Euclidean division of FDG' by G gives polynomials H_k and Q such that

FDG'=QG+H_k

and

\deg H_k<\deg G.

Setting

F_{k-1}=FC+Q,

one gets

\frac{F}{G^k} = \frac{H_k}{G^k}+\frac{F_{k-1}}{G^{k-1}},

with

\deg H_k<\deg G.
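This reduction step can likewise be checked mechanically. The following sketch, assuming SymPy, carries out one step for a concrete F/G^k with G irreducible, recovering H_k and F_{k-1} exactly as in the derivation; the chosen F, G and k are only an example.

# One reduction step F/G**k -> H_k/G**k + F_{k-1}/G**(k-1) (sketch, assuming SymPy).
from sympy import symbols, gcdex, div, diff, degree, cancel

x = symbols('x')
G = x**2 + 1                       # irreducible, hence square-free
k = 3
F = x**4 - x**3 + 2*x + 5          # arbitrary example with deg F < k*deg G

C, D, g = gcdex(G, diff(G, x))     # C*G + D*G' == 1 because G is square-free
assert g == 1

Q, Hk = div(F*D*diff(G, x), G, x)  # F*D*G' == Q*G + H_k with deg H_k < deg G
Fk1 = (F*C + Q).expand()           # F_{k-1}

assert degree(Hk, x) < degree(G, x)
assert cancel(F/G**k - Hk/G**k - Fk1/G**(k-1)) == 0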

Iterating this process with

\frac{F_{k-1}}{G^{k-1}}

in place of

\frac{F}{G^k}

leads eventually to the following theorem.

Statement

Theorem. Let f and g be nonzero polynomials over a field K. Write g as a product of powers of distinct irreducible polynomials: g=\prod_{i=1}^k p_i^{n_i}. There are (unique) polynomials b and a_{ij} with \deg a_{ij}<\deg p_i such that

\frac{f}{g}=b+\sum_{i=1}^k\sum_{j=1}^{n_i}\frac{a_{ij}}{p_i^j}.

If \deg f<\deg g, then b=0.

The uniqueness can be proved as follows. Let d = \max(1 + \deg f, \deg g). All together, b and the a_{ij} have d coefficients. The shape of the decomposition defines a linear map from coefficient vectors to polynomials f of degree less than d. The existence proof means that this map is surjective. As the two vector spaces have the same dimension, the map is also injective, which means uniqueness of the decomposition. By the way, this proof induces an algorithm for computing the decomposition through linear algebra.

If K is the field of complex numbers, the fundamental theorem of algebra implies that all the p_i have degree one, and all the numerators a_{ij} are constants. When K is the field of real numbers, some of the p_i may be quadratic, so, in the partial fraction decomposition, quotients of linear polynomials by powers of quadratic polynomials may also occur.

In the preceding theorem, one may replace "distinct irreducible polynomials" by "pairwise coprime polynomials that are coprime with their derivative". For example, the p_i may be the factors of the square-free factorization of g. When K is the field of rational numbers, as is typically the case in computer algebra, this allows one to replace factorization by greatest-common-divisor computation for computing a partial fraction decomposition.

Application to symbolic integration

For the purpose of symbolic integration, the preceding result may be refined into the following.

Theorem. Let f and g be nonzero polynomials over a field K. Write g as a product of powers of pairwise coprime polynomials which have no multiple root in an algebraically closed field: g=\prod_{i=1}^k p_i^{n_i}. There are (unique) polynomials b and c_{ij} with \deg c_{ij}<\deg p_i such that

\frac{f}{g}=b+\sum_{i=1}^k\sum_{j=2}^{n_i}\left(\frac{c_{ij}}{p_i^{j-1}}\right)' + \sum_{i=1}^k \frac{c_{i1}}{p_i},

where X' denotes the derivative of X.

This reduces the computation of the antiderivative of a rational function to the integration of the last sum, which is called the logarithmic part, because its antiderivative is a linear combination of logarithms.

There are various methods to compute the above decomposition. One simple way is called Hermite's method. First, b is immediately computed by Euclidean division of f by g, reducing to the case where \deg(f) < \deg(g). Next, one knows \deg(c_{ij}) < \deg(p_i), so one may write each c_{ij} as a polynomial with unknown coefficients. Reducing the sum of fractions in the theorem to a common denominator, and equating the coefficients of each power of x in the two numerators, one gets a system of linear equations which can be solved to obtain the desired (unique) values for the unknown coefficients.
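As a small illustration of the undetermined-coefficients step (only that step, not the full Hermite reduction), the following sketch, assuming SymPy, writes the unknown numerators symbolically, clears denominators, equates the coefficients of the powers of x, and solves the resulting linear system; the fraction chosen here is the one that reappears in Example 2 below.

# Undetermined coefficients: clear denominators, equate coefficients, solve
# (sketch, assuming SymPy).
from sympy import symbols, Poly, solve, expand, cancel

x, A, B, C = symbols('x A B C')

f = 4*x**2 - 8*x + 16              # numerator
g = x*(x**2 - 4*x + 8)             # denominator with square-free factors x and x**2 - 4*x + 8

# Ansatz A/x + (B*x + C)/(x**2 - 4*x + 8); its numerator over the common denominator g:
ansatz_numerator = expand(A*(x**2 - 4*x + 8) + (B*x + C)*x)

equations = [lhs - rhs for lhs, rhs in zip(Poly(f, x).all_coeffs(),
                                           Poly(ansatz_numerator, x).all_coeffs())]
sol = solve(equations, [A, B, C])
print(sol)                         # expected: {A: 2, B: 2, C: 0}

# Sanity check: the solved decomposition reproduces f/g.
decomposition = sol[A]/x + (sol[B]*x + sol[C])/(x**2 - 4*x + 8)
assert cancel(decomposition - f/g) == 0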

Procedure

Given two polynomials

P(x)

and

Q(x)=(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n),

where the \alpha_i are distinct constants and \deg P < n, explicit expressions for partial fractions can be obtained by supposing that

\frac{P(x)}{Q(x)} = \frac{c_1}{x-\alpha_1} + \frac{c_2}{x-\alpha_2} + \cdots + \frac{c_n}{x-\alpha_n}

and solving for the c_i constants, by substitution, by equating the coefficients of terms involving the powers of x, or otherwise. (This is a variant of the method of undetermined coefficients. After both sides of the equation are multiplied by Q(x), one side of the equation is a specific polynomial, and the other side is a polynomial with undetermined coefficients. The equality is possible only when the coefficients of like powers of x are equal. This yields n equations in n unknowns, the c_k.)

A more direct computation, which is strongly related to Lagrange interpolation, consists of writing

\frac{P(x)}{Q(x)} = \sum_{j=1}^n \frac{P(\alpha_j)}{Q'(\alpha_j)}\,\frac{1}{x-\alpha_j},

where Q' is the derivative of the polynomial Q. The coefficients of \tfrac{1}{x-\alpha_j} are called the residues of P/Q.
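This formula translates directly into a short computation. A sketch, assuming SymPy, for a denominator with three distinct roots (chosen only as an example):

# Coefficients over distinct linear factors via P(alpha_j)/Q'(alpha_j)
# (sketch, assuming SymPy).
from sympy import symbols, roots, diff, cancel, Add

x = symbols('x')
P = 3*x + 1                          # arbitrary example with deg P < deg Q
Q = (x - 1)*(x + 2)*(x - 4)          # distinct roots 1, -2, 4

Qprime = diff(Q, x)
terms = [P.subs(x, a)/Qprime.subs(x, a) * 1/(x - a) for a in roots(Q, x)]

decomposition = Add(*terms)
assert cancel(decomposition - P/Q) == 0
print(decomposition)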

This approach does not account for several other cases, but can be modified accordingly. If

\deg P\geq\deg Q,

then it is necessary to perform the Euclidean division of P by Q, using polynomial long division, giving P(x) = E(x)Q(x) + R(x) with \deg R < \deg Q. Dividing by Q(x) this gives

\frac{P(x)}{Q(x)} = E(x) + \frac{R(x)}{Q(x)},

and then one seeks partial fractions for the remainder fraction (which by definition satisfies \deg R < \deg Q).

Illustration

In an example application of this procedure, \frac{3x+5}{(1-2x)^2} can be decomposed in the form

\frac{3x+5}{(1-2x)^2} = \frac{A}{(1-2x)^2} + \frac{B}{1-2x}.

Clearing denominators shows that 3x+5 = A + B(1-2x). Expanding and equating the coefficients of powers of x gives

5 = A + B \quad\text{and}\quad 3x = -2Bx.

Solving this system of linear equations for A and B yields A = 13/2 and B = -3/2. Hence,

\frac{3x+5}{(1-2x)^2} = \frac{13/2}{(1-2x)^2} + \frac{-3/2}{1-2x}.

Residue method

See also: Heaviside cover-up method.

Over the complex numbers, suppose f(x) is a rational proper fraction, and can be decomposed into

f(x) = \sum_i \left(\frac{a_{i1}}{x - x_i} + \frac{a_{i2}}{(x - x_i)^2} + \cdots + \frac{a_{ik_i}}{(x - x_i)^{k_i}} \right).

Let

g_{ij}(x) = (x - x_i)^{j-1}f(x);

then, according to the uniqueness of Laurent series, a_{ij} is the coefficient of the term (x - x_i)^{-1} in the Laurent expansion of g_{ij}(x) about the point x_i, i.e., its residue

a_{ij} = \operatorname{Res}(g_{ij},x_i).

This is given directly by the formula

a_{ij} = \frac{1}{(k_i-j)!}\lim_{x\to x_i}\frac{d^{k_i-j}}{dx^{k_i-j}} \left((x-x_i)^{k_i} f(x)\right),

or in the special case when x_i is a simple root,

a_{i1}=\frac{P(x_i)}{Q'(x_i)}, \quad\text{when}\quad f(x)=\frac{P(x)}{Q(x)}.
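The derivative formula can be evaluated directly with a computer algebra system. The sketch below, assuming SymPy, computes the coefficients at a double pole of an illustrative fraction and compares the j = 1 coefficient with SymPy's residue function.

# Coefficients at a pole of order k_i via the derivative formula
# (sketch, assuming SymPy).
from sympy import symbols, diff, limit, factorial, residue, cancel

z = symbols('z')
f = (3*z + 2)/((z - 1)**2 * (z + 4))   # double pole at z = 1, simple pole at z = -4

zi, ki = 1, 2
a = {}
for j in (1, 2):
    g = (z - zi)**ki * f
    a[j] = limit(diff(g, z, ki - j), z, zi) / factorial(ki - j)

print(a)                                # coefficient of 1/(z-1) and of 1/(z-1)**2
assert a[1] == residue(f, z, 1)         # a_{i1} is the residue at the pole

# Rebuild the whole decomposition, adding the simple pole at z = -4.
decomposition = a[1]/(z - 1) + a[2]/(z - 1)**2 + residue(f, z, -4)/(z + 4)
assert cancel(decomposition - f) == 0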

Over the reals

Partial fractions are used in real-variable integral calculus to find real-valued antiderivatives of rational functions. Partial fraction decomposition of real rational functions is also used to find their inverse Laplace transforms. For applications of partial fraction decomposition over the reals, see the application to symbolic integration above and the use of partial fractions in Laplace transforms.

General result

Let f(x) be any rational function over the real numbers. In other words, suppose there exist real polynomial functions p(x) and q(x)\neq 0, such that

f(x) = \frac{p(x)}{q(x)}.

By dividing both the numerator and the denominator by the leading coefficient of q(x), we may assume without loss of generality that q(x) is monic. By the fundamental theorem of algebra, we can write

q(x) = (x-a_1)^{j_1}\cdots(x-a_m)^{j_m}(x^2+b_1x+c_1)^{k_1}\cdots(x^2 + b_n x + c_n)^{k_n}

where a_1,\ldots,a_m, b_1,\ldots,b_n, c_1,\ldots,c_n are real numbers with b_i^2-4c_i<0, and j_1,\ldots,j_m, k_1,\ldots,k_n are positive integers. The terms (x-a_i) are the linear factors of q(x) which correspond to real roots of q(x), and the terms (x^2+b_ix+c_i) are the irreducible quadratic factors of q(x) which correspond to pairs of complex conjugate roots of q(x).

Then the partial fraction decomposition of f(x) is the following:

f(x) = \frac{p(x)}{q(x)} = P(x) + \sum_{i=1}^m\sum_{r=1}^{j_i} \frac{A_{ir}}{(x-a_i)^r} + \sum_{i=1}^n\sum_{r=1}^{k_i} \frac{B_{ir}x+C_{ir}}{(x^2+b_ix+c_i)^r}

Here, P(x) is a (possibly zero) polynomial, and the A_{ir}, B_{ir}, and C_{ir} are real constants. There are a number of ways the constants can be found.

The most straightforward method is to multiply through by the common denominator q(x). We then obtain an equation of polynomials whose left-hand side is simply p(x) and whose right-hand side has coefficients which are linear expressions of the constants Air, Bir, and Cir. Since two polynomials are equal if and only if their corresponding coefficients are equal, we can equate the coefficients of like terms. In this way, a system of linear equations is obtained which always has a unique solution. This solution can be found using any of the standard methods of linear algebra. It can also be found with limits (see Example 5).

Examples

Example 1

f(x)=\frac{1}{x^2+2x-3}

Here, the denominator splits into two distinct linear factors:

q(x)=x^2+2x-3=(x+3)(x-1)

so we have the partial fraction decomposition

f(x)=\frac{1}{x^2+2x-3} =\frac{A}{x+3}+\frac{B}{x-1}

Multiplying through by the denominator on the left-hand side gives us the polynomial identity

1=A(x-1)+B(x+3)

Substituting x = −3 into this equation gives A = −1/4, and substituting x = 1 gives B = 1/4, so that

f(x) =\frac{1}{x^2+2x-3} =\frac{1}{4}\left(\frac{-1}{x+3}+\frac{1}{x-1}\right)

Example 2

f(x)=\frac{x^3+16}{x^3-4x^2+8x}

After long division, we have

f(x)=1+\frac{4x^2-8x+16}{x^3-4x^2+8x}=1+\frac{4x^2-8x+16}{x(x^2-4x+8)}

The factor x2 − 4x + 8 is irreducible over the reals, as its discriminant is negative. Thus the partial fraction decomposition over the reals has the shape

\frac{4x^2-8x+16}{x(x^2-4x+8)}=\frac{A}{x}+\frac{Bx+C}{x^2-4x+8}

Multiplying through by x3 − 4x2 + 8x, we have the polynomial identity

4x^2-8x+16 = A \left(x^2-4x+8\right) + \left(Bx+C\right)x

Taking x = 0, we see that 16 = 8A, so A = 2. Comparing the x2 coefficients, we see that 4 = A + B = 2 + B, so B = 2. Comparing linear coefficients, we see that −8 = −4A + C = −8 + C, so C = 0. Altogether,

f(x)=1+2\left(\frac{1}{x}+\frac{x}{x^2-4x+8}\right)

The fraction can be completely decomposed using complex numbers. According to the fundamental theorem of algebra every complex polynomial of degree n has n (complex) roots (some of which can be repeated). The second fraction can be decomposed to:

\frac{x}{x^2-4x+8}=\frac{D}{x-(2+2i)}+\frac{E}{x-(2-2i)}

Multiplying through by the denominator gives:

x=D(x-(2-2i))+E(x-(2+2i))

Equating the coefficients of x and the constant (with respect to x) coefficients of both sides of this equation, one gets a system of two linear equations in D and E, whose solution is

D=\frac{2+2i}{4i}=\frac{1-i}{2}, \qquad E=\frac{2-2i}{-4i}=\frac{1+i}{2}.

Thus we have a complete decomposition:

f(x)=\frac{x^3+16}{x^3-4x^2+8x}=1+\frac{2}{x}+\frac{1-i}{x-(2+2i)}+\frac{1+i}{x-(2-2i)}

One may also compute directly D and E with the residue method (see also example 4 below).

Example 3

This example illustrates almost all the "tricks" we might need to use, short of consulting a computer algebra system.

f(x)=\frac{x^9-2x^6+2x^5-7x^4+13x^3-11x^2+12x-4}{x^7-3x^6+5x^5-7x^4+7x^3-5x^2+3x-1}

After long division and factoring the denominator, we have

f(x)=x^2+3x+4+\frac{2x^6-4x^5+5x^4-3x^3+x^2+3x}{(x-1)^3(x^2+1)^2}

The partial fraction decomposition takes the form

\frac{2x^6-4x^5+5x^4-3x^3+x^2+3x}{(x-1)^3(x^2+1)^2} = \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{(x-1)^3}+\frac{Dx+E}{x^2+1}+\frac{Fx+G}{(x^2+1)^2}.

Multiplying through by the denominator on the left-hand side we have the polynomial identity

\begin{align}&2x^6 - 4x^5 + 5x^4 - 3x^3 + x^2 + 3x \\[4pt] ={}&A\left(x-1\right)^2 \left(x^2+1\right)^2+B\left(x-1\right)\left(x^2+1\right)^2 +C\left(x^2+1\right)^2 + \left(Dx+E\right)\left(x-1\right)^3\left(x^2+1\right)+\left(Fx+G\right)\left(x-1\right)^3\end{align}

Now we use different values of x to compute the coefficients:

\begin{align} 4 &= 4C & x &=1 \\ 2 + 2i &= (Fi + G) (2+ 2i) & x &= i \\ 0 &= A- B +C - E - G & x &= 0 \end{align}

Solving this we have:

\begin{align} C &= 1, \\ F &=0, \quad G =1, \\ E &= A-B.\end{align}

Using these values we can write:

\begin{align}&2x^6-4x^5+5x^4-3x^3+x^2+3x \\[4pt]={}& A\left(x-1\right)^2 \left(x^2+1\right)^2 + B\left(x-1\right)\left(x^2+1\right)^2 + \left(x^2 + 1\right)^2 + \left(Dx + \left(A-B\right)\right)\left(x-1\right)^3 \left(x^2+1\right) + \left(x-1\right)^3 \\[4pt]={}& \left(A + D\right) x^6 + \left(-A - 3D\right) x^5 + \left(2B + 4D + 1\right) x^4 + \left(-2B - 4D + 1\right) x^3 + \left(-A + 2B + 3D - 1\right) x^2 + \left(A - 2B - D + 3\right) x\end{align}

We compare the coefficients of x6 and x5 on both side and we have:

\begin{cases} A+D=2 \\ -A-3D = -4 \end{cases} \quad \Rightarrow \quad A= D = 1.

Therefore:

2x^6-4x^5+5x^4-3x^3+x^2+3x = 2x^6 -4x^5 + (2B + 5) x^4 + (-2B - 3) x^3 + (2B +1) x^2 + (- 2B + 3) x

which gives us B = 0. Thus the partial fraction decomposition is given by:

f(x)=x^2+3x+4+\frac{1}{x-1} + \frac{1}{(x-1)^3} + \frac{x+1}{x^2+1}+\frac{1}{(x^2+1)^2}.

Alternatively, instead of expanding, one can obtain other linear dependences on the coefficients by computing some derivatives at

x=1 \quad\text{and}\quad x=i

in the above polynomial identity. (To this end, recall that the derivative at x = a of (x-a)^m p(x) vanishes if m > 1 and is just p(a) for m = 1.) For instance the first derivative at x = 1 gives

2\cdot6-4\cdot5+5\cdot4-3\cdot3+2+3 = A\cdot(0+0) + B\cdot(4+ 0) + 8 + D\cdot0

that is 8 = 4B + 8 so B = 0.
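Since the example deliberately avoids a computer algebra system, it is natural to cross-check the final result with one. A minimal sketch of such a check, assuming SymPy, is:

# Cross-check of Example 3 (sketch, assuming SymPy).
from sympy import symbols, apart, cancel

x = symbols('x')
f = ((x**9 - 2*x**6 + 2*x**5 - 7*x**4 + 13*x**3 - 11*x**2 + 12*x - 4)
     / (x**7 - 3*x**6 + 5*x**5 - 7*x**4 + 7*x**3 - 5*x**2 + 3*x - 1))

hand_result = (x**2 + 3*x + 4
               + 1/(x - 1) + 1/(x - 1)**3
               + (x + 1)/(x**2 + 1) + 1/(x**2 + 1)**2)

assert cancel(f - hand_result) == 0     # the hand computation and f agree
print(apart(f, x))                      # SymPy's own decomposition, for comparison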

Example 4 (residue method)

f(z)=\frac{z^2-5}{(z^2-1)(z^2+1)}=\frac{z^2-5}{(z+1)(z-1)(z+i)(z-i)}

Thus, f(z) can be decomposed into rational functions whose denominators are z+1, z−1, z+i, z−i. Since each term is of power one, −1, 1, −i and i are simple poles.

Hence, the residues associated with each pole, given by

\frac{P(z_i)}{Q'(z_i)} = \frac{z_i^2-5}{4z_i^3},

are 1, -1, \tfrac{3i}{2}, -\tfrac{3i}{2}, respectively, and

f(z)=\frac{1}{z+1}-\frac{1}{z-1}+\frac{3i}{2}\cdot\frac{1}{z+i}-\frac{3i}{2}\cdot\frac{1}{z-i}.
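A quick machine check of these residues, assuming SymPy's residue function, might look like:

# Verify the residues of Example 4 at its four simple poles (sketch, assuming SymPy).
from sympy import symbols, I, residue, simplify

z = symbols('z')
f = (z**2 - 5)/((z**2 - 1)*(z**2 + 1))

expected = {-1: 1, 1: -1, -I: 3*I/2, I: -3*I/2}
for pole, value in expected.items():
    assert simplify(residue(f, z, pole) - value) == 0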

Example 5 (limit method)

Limits can be used to find a partial fraction decomposition.[4] Consider the following example:

\frac{1}{x^3-1}

First, factor the denominator which determines the decomposition:

\frac{1}{x^3 - 1} = \frac{1}{(x-1)(x^2+x+1)} = \frac{A}{x-1} + \frac{Bx + C}{x^2+x+1}.

Multiplying everything by x-1, and taking the limit when x\to 1, we get

\lim_{x\to 1} \left((x-1)\left (\frac{A}{x-1} + \frac{Bx + C}{x^2+x+1} \right)\right) = \lim_{x\to 1} A + \lim_{x\to 1}\frac{(x-1)(Bx + C)}{x^2+x+1} =A.

On the other hand,

\lim_{x\to 1} \frac{x-1}{x^3-1} = \lim_{x\to 1}\frac{1}{x^2+x+1} = \frac{1}{3},

and thus:

A = \frac{1}{3}.

Multiplying by x and taking the limit when x\to\infty, we have

\lim_{x\to\infty} x\left(\frac{A}{x-1} + \frac{Bx + C}{x^2+x+1} \right)= \lim_{x\to\infty} \frac{Ax}{x-1} + \lim_{x\to\infty} \frac{Bx^2+Cx}{x^2+x+1}= A+B,

and

\lim_{x\to\infty} \frac{x}{x^3-1} =0.

This implies A + B = 0, and so

B=-\frac{1}{3}.

For x = 0, we get

-1=-A+C,

and thus

C=-\tfrac{2}{3}.

Putting everything together, we get the decomposition

\frac{1}{x^3-1} = \frac{1}{3} \left(\frac{1}{x-1} + \frac{-x-2}{x^2+x+1} \right).
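The same limits can be evaluated symbolically. A minimal sketch, assuming SymPy's limit, recovers A, B and C of this example:

# Limit method of Example 5 (sketch, assuming SymPy).
from sympy import symbols, limit, oo, cancel

x = symbols('x')
f = 1/(x**3 - 1)

A = limit((x - 1)*f, x, 1)       # expected 1/3
B = limit(x*f, x, oo) - A        # the limit at infinity equals A + B and is 0, so B = -A
C = f.subs(x, 0) + A             # from -1 = -A + C at x = 0
print(A, B, C)                   # expected: 1/3, -1/3, -2/3

decomposition = A/(x - 1) + (B*x + C)/(x**2 + x + 1)
assert cancel(decomposition - f) == 0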

Example 6 (integral)

Suppose we have the indefinite integral:

\int \frac{x^4+x^3+x^2+1}{x^2+x-2} \,dx

Before performing the decomposition, we must perform polynomial long division and factor the denominator. Doing this results in:

\int \left(x^2 + 3 + \frac{-3x+7}{(x+2)(x-1)}\right) dx

We may now perform the partial fraction decomposition:

\int \left(x^2+3+ \frac{-3x+7}{(x+2)(x-1)}\right) dx = \int \left(x^2+3+ \frac{A}{x+2}+\frac{B}{x-1}\right) dx,

so that A(x-1)+B(x+2)=-3x+7. Substituting x=1 to solve for B and x=-2 to solve for A yields:

A=-\frac{13}{3}, \qquad B=\frac{4}{3}

Plugging all of this back into our integral allows us to find the answer:

\int \left(x^2+3+ \frac{-13/3}{x+2}+\frac{4/3}{x-1}\right) \,dx = \frac{x^3}{3} + 3x-\frac{13}{3} \ln\left(|x+2|\right)+\frac{4}{3} \ln\left(|x-1|\right)+C
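The whole computation can be cross-checked symbolically; a minimal sketch, assuming SymPy, is below (the absolute values inside the logarithms are dropped for the symbolic check, as they only matter for the real domain of the antiderivative):

# Cross-check of Example 6 (sketch, assuming SymPy).
from sympy import symbols, apart, log, Rational, diff, cancel

x = symbols('x')
integrand = (x**4 + x**3 + x**2 + 1)/(x**2 + x - 2)

print(apart(integrand, x))   # expected (up to ordering): x**2 + 3 + 4/(3*(x - 1)) - 13/(3*(x + 2))

antiderivative = (x**3/3 + 3*x
                  - Rational(13, 3)*log(x + 2)
                  + Rational(4, 3)*log(x - 1))
assert cancel(diff(antiderivative, x) - integrand) == 0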

The role of the Taylor polynomial

The partial fraction decomposition of a rational function can be related to Taylor's theorem as follows. Let

P(x), Q(x), A_1(x),\ldots, A_r(x)

be real or complex polynomials; assume that

Q=\prod_{j=1}^{r}(x-\lambda_j)^{\nu_j},

where the \lambda_j are distinct, and that

\deg A_1<\nu_1, \ldots, \deg A_r<\nu_r, \quad \text{and} \quad \deg(P)<\deg(Q)=\sum_{j=1}^{r}\nu_j.

Also define

Q_i=\prod_{j\neq i}(x-\lambda_j)^{\nu_j}=\frac{Q}{(x-\lambda_i)^{\nu_i}}, \qquad 1 \leqslant i \leqslant r.

Then we have

\frac{P}{Q}=\sum_{i=1}^{r}\frac{A_i}{(x-\lambda_i)^{\nu_i}}

if, and only if, each polynomial A_i(x) is the Taylor polynomial of \tfrac{P}{Q_i} of order \nu_i-1 at the point \lambda_i:

A_i(x):=\sum_{k=0}^{\nu_i-1} \frac{1}{k!}\left(\frac{P}{Q_i}\right)^{(k)}(\lambda_i)\ (x-\lambda_i)^k.

Taylor's theorem (in the real or complex case) then provides a proof of the existence and uniqueness of the partial fraction decomposition, and a characterization of the coefficients.
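This characterization is easy to verify computationally: each numerator A_i can be produced as a truncated Taylor expansion of P/Q_i about \lambda_i. A sketch, assuming SymPy's series, on a small illustrative example:

# Numerators as Taylor polynomials of P/Q_i (sketch, assuming SymPy).
from sympy import symbols, series, cancel, Add

x = symbols('x')
P = x**2 + 1                           # illustrative, with deg P < deg Q
poles = {0: 2, 1: 1}                   # lambda_i -> nu_i, i.e. Q = x**2 * (x - 1)

Q = 1
for lam, nu in poles.items():
    Q *= (x - lam)**nu

terms = []
for lam, nu in poles.items():
    Qi = cancel(Q / (x - lam)**nu)
    Ai = series(P/Qi, x, lam, nu).removeO()   # Taylor polynomial of order nu - 1 at lambda_i
    terms.append(Ai/(x - lam)**nu)

decomposition = Add(*terms)
assert cancel(decomposition - P/Q) == 0
print(decomposition)                   # the pieces (-x - 1)/x**2 and 2/(x - 1)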

Sketch of the proof

The above partial fraction decomposition implies, for each 1 ≤ i ≤ r, a polynomial expansion

\frac{P}{Q_i}=A_i + O((x-\lambda_i)^{\nu_i}), \qquad \text{for } x\to\lambda_i,

so A_i is the Taylor polynomial of \tfrac{P}{Q_i}, because of the uniqueness of the polynomial expansion of order \nu_i-1, and by assumption \deg A_i<\nu_i.

Conversely, if the A_i are the Taylor polynomials, the above expansions at each \lambda_i hold, therefore we also have

P-Q_i A_i = O((x-\lambda_i)^{\nu_i}), \qquad \text{for } x\to\lambda_i,

which implies that the polynomial P-Q_iA_i is divisible by (x-\lambda_i)^{\nu_i}.

For j\neq i, Q_jA_j is also divisible by (x-\lambda_i)^{\nu_i}, so

P- \sum_{j=1}^{r}Q_jA_j

is divisible by Q. Since

\deg\left(P- \sum_{j=1}^{r}Q_jA_j \right) < \deg(Q),

we then have

P- \sum_{j=1}^{r}Q_jA_j=0,

and we find the partial fraction decomposition by dividing by Q.

Fractions of integers

The idea of partial fractions can be generalized to other integral domains, say the ring of integers where prime numbers take the role of irreducible denominators. For example:

\frac{1}{18} = \frac{1}{2} - \frac{1}{3} - \frac{1}{9}.
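A small sketch of this analogy, assuming Python's fractions module and SymPy's factorint for the prime factorization: it splits a reduced fraction into pieces whose denominators are the prime powers dividing the original denominator. The splitting is not unique, so the function may return a combination that differs from, but sums to the same value as, the one displayed above.

# Partial fractions over the integers: denominators become prime powers
# (sketch; factorint is SymPy's integer factorization).
from fractions import Fraction
from math import gcd
from sympy import factorint

def integer_partial_fractions(num, den):
    """Split num/den (in lowest terms, den > 0) into fractions whose
    denominators are the prime powers appearing in den."""
    assert den > 0 and gcd(num, den) == 1
    parts, n, d = [], num, den
    for p, e in factorint(den).items():
        q = p**e                   # prime-power block of the denominator
        rest = d // q
        if rest == 1:              # last block: the remainder lives over q
            parts.append(Fraction(n, q))
            break
        u = pow(q, -1, rest)       # Bezout: u*q + v*rest == 1
        v = (1 - u*q) // rest
        parts.append(Fraction(n*v, q))   # the piece with denominator q
        n, d = n*u, rest                 # continue with the cofactor
    return parts

pieces = integer_partial_fractions(1, 18)
print(pieces)                      # e.g. [Fraction(-1, 2), Fraction(5, 9)]
assert sum(pieces) == Fraction(1, 18)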

Notes and References

  1. Larson, Ron (2016). Algebra & Trigonometry. Cengage Learning. ISBN 9781337271172.
  2. Horowitz, Ellis (1971). "Algorithms for partial fraction decomposition and rational function integration." Proceedings of the Second ACM Symposium on Symbolic and Algebraic Manipulation. ACM.
  3. Grosholz, Emily (2000). The Growth of Mathematical Knowledge. Kluwer Academic Publishers. p. 179. ISBN 978-90-481-5391-6.
  4. Bluman, George W. (1984). Problem Book for First Year Calculus. New York: Springer-Verlag. pp. 250–251.