Polynomial evaluation

In mathematics and computer science, polynomial evaluation refers to computing the value of a polynomial when values are substituted for its indeterminates. In other words, evaluating the polynomial

P(x_1,x_2)=2x_1x_2+x_1^3+4

at

x_1=2,\quad x_2=3

consists of computing

P(2,3)=2\cdot 2\cdot 3+2^3+4=24.

For evaluating the univariate polynomial

a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0,

the most naive method would use

n

multiplications to compute

a_nx^n

, use

n-1

multiplications to compute

a_{n-1}x^{n-1}

and so on for a total of

\tfrac{n(n+1)}{2}

multiplications and

n

additions. Using better methods, such as Horner's rule, this can be reduced to

n

multiplications and

n

additions. If some preprocessing is allowed, even more savings are possible.

Background

This problem arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute k-independent hashing.

In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact.

General methods

Horner's rule

See also: Horner's method.

Horner's method evaluates a polynomial using repeated bracketing:

\begin{align} a_0 + {} &a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n \\ &= a_0 + x \bigg(a_1 + x \Big(a_2 + x \big(a_3 + \cdots + x(a_{n-1} + x\,a_n) \cdots \big) \Big) \bigg). \end{align}

This method reduces the number of multiplications and additions to just

n

.

Horner's method is so common that a dedicated computer instruction, the "multiply–accumulate operation", has been added to many processors, allowing the addition and multiplication to be done in one combined step.
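As a concrete illustration, here is a minimal sketch of Horner's rule (our own example code, not part of the original article; the function name and coefficient ordering are arbitrary choices):

    def horner(coeffs, x):
        # Evaluate a_n x^n + ... + a_1 x + a_0, with coeffs = [a_n, ..., a_1, a_0]
        # (highest degree first): n multiplications and n additions in total.
        result = 0
        for a in coeffs:
            result = result * x + a
        return result

    # Example: P(x) = 2x^3 + 3x + 1 at x = 2 gives 16 + 6 + 1 = 23.
    assert horner([2, 0, 3, 1], 2) == 23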

Multivariate

If the polynomial is multivariate, Horner's rule can be applied recursively over some ordering of the variables. For example,

P(x,y)=4+x+2xy+2x^2y+x^2y^2

can be written as

\begin{align} P(x,y)&=4+x(1+y(2)+x(y(2+y)))\quad\text{or}\\ P(x,y)&=4+x+y(x(2+x(2))+y(x^2)). \end{align}

An efficient version of this approach was described by Carnicer and Gasca.[1]
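As a sketch of the recursive idea (our own illustration, not the Carnicer–Gasca algorithm), a multivariate polynomial can be stored as nested coefficient lists and evaluated by applying Horner's rule one variable at a time:

    def horner_multi(p, variables):
        # p is a list of coefficients in the first variable (lowest degree first);
        # each coefficient is either a number or a nested list in the next variable.
        if not variables:
            return p
        x, rest = variables[0], variables[1:]
        result = 0
        for c in reversed(p):
            inner = horner_multi(c, rest) if isinstance(c, list) else c
            result = result * x + inner
        return result

    # P(x,y) = 4 + x + 2xy + 2x^2 y + x^2 y^2, grouped as 4 + x(1 + 2y) + x^2(2y + y^2):
    p = [4, [1, 2], [0, 2, 1]]
    assert horner_multi(p, [3, 5]) == 4 + 3 + 2*3*5 + 2*9*5 + 9*25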

Estrin's scheme

See also: Estrin's scheme.

While it is not possible to do less computation than Horner's rule (without preprocessing), on modern computers the order of evaluation can matter a lot for computational efficiency. A method known as Estrin's scheme computes a (univariate) polynomial in a tree-like pattern:

\begin{align} P(x) = (a_0 + a_1 x) + (a_2 + a_3 x) x^2 + \big((a_4 + a_5 x) + (a_6 + a_7 x) x^2\big)x^4. \end{align}

Combined with exponentiation by squaring, this allows parallelizing the computation.
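A minimal sketch of the idea (our own illustration): adjacent coefficients are paired, then the pairs are combined with x^2, x^4, and so on; the combinations within each level are independent of each other and can run in parallel.

    def estrin(coeffs, x):
        # coeffs = [a_0, a_1, ..., a_n], lowest degree first.
        level = list(coeffs)
        power = x
        while len(level) > 1:
            if len(level) % 2:
                level.append(0)  # pad so every coefficient has a partner
            # Combine neighbours: a_{2i} + a_{2i+1} * power (independent operations).
            level = [level[i] + level[i + 1] * power for i in range(0, len(level), 2)]
            power *= power  # x, x^2, x^4, ... by squaring
        return level[0]

    # Example: 1 + 2x + 3x^2 + 4x^3 at x = 2 gives 49.
    assert estrin([1, 2, 3, 4], 2) == 49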

Evaluation with preprocessing

Arbitrary polynomials can be evaluated with fewer operations than Horner's rule requires if we first "preprocess" the coefficients

a_n,\ldots,a_0

.

An example was first given by Motzkin,[2] who noted that

P(x)=x^4+a_3x^3+a_2x^2+a_1x+a_0

can be written as

y=(x+\beta_0)x+\beta_1,\quad P(x)=(y+x+\beta_2)y+\beta_3,

where the values

\beta_0,\ldots,\beta_3

are computed in advance, based on

a_0,\ldots,a_3

. Motzkin's method uses just 3 multiplications compared to Horner's 4.

The values for each

\beta_i

can be easily computed by expanding

P(x)

and equating the coefficients:

\begin{align} \beta_0&=\tfrac12(a_3-1), & z&=a_2-\beta_0(\beta_0+1), & \beta_1&=a_1-\beta_0 z,\\ \beta_2&=z-2\beta_1, & \beta_3&=a_0-\beta_1(\beta_1+\beta_2). \end{align}
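A short sketch following these formulas (our own example code): the \beta values are computed once, after which each evaluation of the monic quartic costs two multiplications; a general quartic needs one more multiplication to rescale by the leading coefficient, giving the total of 3 versus Horner's 4.

    def motzkin_preprocess(a0, a1, a2, a3):
        # Precompute beta_0..beta_3 for P(x) = x^4 + a3 x^3 + a2 x^2 + a1 x + a0.
        b0 = (a3 - 1) / 2
        z = a2 - b0 * (b0 + 1)
        b1 = a1 - b0 * z
        b2 = z - 2 * b1
        b3 = a0 - b1 * (b1 + b2)
        return b0, b1, b2, b3

    def motzkin_eval(x, betas):
        # Two multiplications per evaluation of the monic quartic.
        b0, b1, b2, b3 = betas
        y = (x + b0) * x + b1
        return (y + x + b2) * y + b3

    # Example: 24*exp(x) is approximately x^4 + 4x^3 + 12x^2 + 24x + 24.
    betas = motzkin_preprocess(24, 24, 12, 4)  # (1.5, 11.625, -15.0, 63.234375)
    assert motzkin_eval(2.0, betas) == 2**4 + 4*2**3 + 12*2**2 + 24*2 + 24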

Example

To compute the Taylor expansion

\exp(x)\approx 1+x+x^2/2+x^3/6+x^4/24,

we can upscale by a factor of 24, apply the above steps, and scale back down. That gives us the three-multiplication computation

y=(x+1.5)x+11.625,\quad P(x)=(y+x-15)y/24+2.63477.

This improves on the equivalent Horner form (that is,

P(x)=1+x(1+x(1/2+x(1/6+x/24)))

) by one multiplication.

Some general methods include the Knuth–Eve algorithm and the Rabin–Winograd algorithm.

Multipoint evaluation

Evaluation of an

n

-degree polynomial

P(x)

at multiple points

x_1,\ldots,x_m

can be done with

mn

multiplications by using Horner's method

m

times. Using the preprocessing approach above, this can be reduced by a factor of two, that is, to

mn/2

multiplications. However, it is possible to do better.

It is possible to reduce the time requirement to just

O((n+m)\log^2(n+m))

.[3] The idea is to define two polynomials that are zero on, respectively, the first and second half of the points (taking m = n for simplicity):

m_0(x)=(x-x_1)\cdots(x-x_{n/2})

and

m_1(x)=(x-x_{n/2+1})\cdots(x-x_n)

. We then compute

R_0=P\bmod m_0

and

R_1=P\bmod m_1

using the Polynomial remainder theorem, which can be done in

O(n\log n)

time using a fast Fourier transform. This means

P(x)=Q(x)m_0(x)+R_0(x)

and

P(x)=Q(x)m_1(x)+R_1(x)

by construction, where

R_0

and

R_1

are polynomials of degree at most

n/2

. Because of how

m_0

and

m_1

were defined, we have

\begin{align} R_0(x_i)&=P(x_i)\quad\text{for } i\le n/2 \text{ and}\\ R_1(x_i)&=P(x_i)\quad\text{for } i>n/2. \end{align}

Thus to compute

P

on all

n

of the

x_i

, it suffices to compute the smaller polynomials

R_0

and

R_1

on each half of the points. This gives us a divide-and-conquer algorithm with

T(n)=2T(n/2)+n\log n

, which implies

T(n)=O(n(\log n)^2)

by the master theorem.
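A sketch of this divide-and-conquer structure (our own illustration; it uses schoolbook polynomial division rather than FFT-based remaindering, so it does not achieve the stated bound):

    def poly_from_roots(roots):
        # Product of (x - r) over the roots; coefficients lowest degree first.
        poly = [1.0]
        for r in roots:
            poly = [0.0] + poly  # multiply by x
            for i in range(len(poly) - 1):
                poly[i] -= r * poly[i + 1]
        return poly

    def poly_mod(p, m):
        # Remainder of p modulo m (m monic), coefficients lowest degree first.
        p = list(p)
        while len(p) >= len(m):
            c = p[-1] / m[-1]
            for i in range(len(m)):
                p[len(p) - len(m) + i] -= c * m[i]
            p.pop()
        return p

    def multipoint_eval(p, xs):
        # Evaluate p at every point in xs by recursive halving.
        if len(xs) == 1:
            return [sum(a * xs[0] ** i for i, a in enumerate(p))]
        half = len(xs) // 2
        left, right = xs[:half], xs[half:]
        r0 = poly_mod(p, poly_from_roots(left))   # agrees with p on the left half
        r1 = poly_mod(p, poly_from_roots(right))  # agrees with p on the right half
        return multipoint_eval(r0, left) + multipoint_eval(r1, right)

    # Example: P(x) = 1 + 2x + 3x^2 at the points 0, 1, 2, 3.
    assert multipoint_eval([1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0]) == [1.0, 6.0, 17.0, 34.0]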

In the case where the points at which we wish to evaluate the polynomial have some structure, simpler methods exist. For example, Knuth[4] (section 4.6.4) gives a method for tabulating polynomial values of the type

P(x_0+h),P(x_0+2h),\ldots
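The method is based on a table of forward differences. A sketch of the idea (our own illustration): after seeding the table with the first few values, each further value of P costs only additions.

    def tabulate(coeffs, x0, h, count):
        # Return [P(x0), P(x0 + h), P(x0 + 2h), ...] with `count` entries, where
        # coeffs = [a_0, a_1, ...] is lowest degree first.
        n = len(coeffs) - 1
        p = lambda x: sum(a * x**i for i, a in enumerate(coeffs))
        # Seed with P(x0), ..., P(x0 + n*h), then convert to forward differences.
        d = [p(x0 + i * h) for i in range(n + 1)]
        for level in range(1, n + 1):
            for i in range(n, level - 1, -1):
                d[i] -= d[i - 1]
        out = []
        for _ in range(count):
            out.append(d[0])
            for i in range(n):  # advance every difference by one step of h
                d[i] += d[i + 1]
        return out

    # Example: the squares 0, 1, 4, 9, 16 via additions only after the seed values.
    assert tabulate([0, 0, 1], x0=0, h=1, count=5) == [0, 1, 4, 9, 16]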

Dynamic evaluation

In the case where

x1,...,xm

are not known in advance, Kedlaya and Umans[5] gave a data structure for evaluating polynomials over a finite field

F_q

in time

(\log n)^{O(1)}(\log_2 q)^{1+o(1)}

per evaluation after some initial preprocessing. This was shown by Larsen[6] to be essentially optimal.

The idea is to transform

P(x)

of degree

n

into a multivariate polynomial

f(x_1,x_2,\ldots,x_m)

, such that

P(x)=f(x,x^d,x^{d^2},\ldots,x^{d^{m-1}})

and the individual degrees of

f

are at most

d

. Since this is done modulo

q

, the largest value

f

can take (over

\mathbb{Z}

) is

M=d^m(q-1)^{dm}

. Using the Chinese remainder theorem, it suffices to evaluate

f

modulo different primes

p_1,\ldots,p_\ell

with a product at least

M

.Each prime can be taken to be roughly

\log M=O(dm\log q)

, and the number of primes needed,

\ell

, is roughly the same. Doing this process recursively, we can get the primes as small as

\log\log q

. That means we can compute and store

f

on all the possible values in

T=(\log\log q)^m

time and space. If we take

d=\log q

, we get

m=\tfrac{\log n}{\log\log q}

, so the time/space requirement is just

n^{\frac{\log\log\log q}{\log\log q}}

.
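A sketch of just the rewriting step (our own illustration; the full data structure, with the Chinese remaindering and recursion described above, is considerably more involved). Each exponent is split into its base-d digits, so that the coefficient of x^e becomes the coefficient of the monomial x_1^{e_0} x_2^{e_1} \cdots in f:

    def to_multivariate(coeffs, d, m):
        # coeffs[e] is the coefficient of x^e, with degree < d**m.
        # Returns f as a dict mapping digit tuples (e_0, ..., e_{m-1}) to coefficients,
        # so that P(x) = f(x, x^d, x^(d^2), ..., x^(d^(m-1))).
        f = {}
        for e, a in enumerate(coeffs):
            digits = []
            for _ in range(m):
                digits.append(e % d)  # individual degrees stay below d
                e //= d
            f[tuple(digits)] = a
        return f

    def eval_multivariate(f, x, d, q):
        # Evaluate f at (x, x^d, x^(d^2), ...) modulo q; equals P(x) mod q.
        total = 0
        for digits, a in f.items():
            term = a
            for i, e in enumerate(digits):
                term = term * pow(x, e * d**i, q) % q
            total = (total + term) % q
        return total

    # Example: a degree-7 polynomial over F_101, with d = 2 and m = 3.
    coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
    f = to_multivariate(coeffs, d=2, m=3)
    direct = sum(a * pow(17, e, 101) for e, a in enumerate(coeffs)) % 101
    assert eval_multivariate(f, 17, d=2, q=101) == direct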

Kedlaya and Umans further show how to combine this preprocessing with fast (FFT) multipoint evaluation. This allows optimal algorithms for many important algebraic problems, such as polynomial modular composition.

Specific polynomials

While general polynomials require

\Omega(n)

operations to evaluate, some polynomials can be computed much faster. For example, the polynomial

P(x)=x^2+2x+1

can be computed using just one multiplication and one addition since

P(x)=(x+1)^2.

Evaluation of powers

See main article: Exponentiation by squaring and Addition-chain exponentiation.

A particularly interesting kind of polynomial is a power like

x^n

. Such polynomials can always be computed in

O(\log n)

operations. Suppose, for example, that we need to compute

x^{16}

; we could simply start with

x

and multiply by

x

to get

x^2

. We can then multiply that by itself to get

x^4

and so on to get

x^8

and

x^{16}

in just four multiplications. Other powers like

x^5

can similarly be computed efficiently by first computing

x^4

with two multiplications and then multiplying by

x

.
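A sketch of exponentiation by squaring (our own illustration), which walks through the bits of the exponent and uses O(\log n) multiplications:

    def power(x, n):
        # Compute x**n using repeated squaring.
        result = 1
        while n > 0:
            if n & 1:        # lowest bit set: multiply the current square in
                result *= x
            x *= x           # square: x, x^2, x^4, x^8, ...
            n >>= 1
        return result

    # Examples: x^16 and x^5 each need only a handful of multiplications.
    assert power(3, 16) == 3**16 and power(3, 5) == 243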

The most efficient way to compute a given power

x^n

is provided by addition-chain exponentiation. However, this requires designing a specific algorithm for each exponent, and finding these algorithms is computationally difficult (NP-complete[7]), so exponentiation by squaring is generally preferred in practice.

Polynomial families

Often polynomials show up in a different form than the well-known

a_nx^n+\cdots+a_1x+a_0

. For polynomials in Chebyshev form we can use the Clenshaw algorithm. For polynomials in Bézier form we can use De Casteljau's algorithm, and for B-splines there is De Boor's algorithm.
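As one example, here is a sketch of the Clenshaw recurrence for a series in Chebyshev polynomials T_k (our own illustration; the other algorithms follow similar backward recurrences):

    def clenshaw_chebyshev(a, x):
        # Evaluate a[0]*T_0(x) + a[1]*T_1(x) + ... using the backward recurrence
        # b_k = 2x*b_{k+1} - b_{k+2} + a_k, finishing with x*b_1 - b_2 + a_0.
        b1 = b2 = 0.0
        for ak in reversed(a[1:]):
            b1, b2 = 2 * x * b1 - b2 + ak, b1
        return x * b1 - b2 + a[0]

    # Example: T_0 + 2*T_1 + 3*T_2 at x = 0.5, where T_2(x) = 2x^2 - 1.
    assert clenshaw_chebyshev([1.0, 2.0, 3.0], 0.5) == 1.0 + 2*0.5 + 3*(2*0.25 - 1)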

Hard polynomials

The fact that some polynomials can be computed significantly faster than "general polynomials" suggests the question: Can we give an example of a simple polynomial that cannot be computed in time much smaller than its degree? Volker Strassen has shown[8] that the polynomial

P(x)=\sum_{k=0}^{n} 2^{2^{kn^3}} x^k

cannot be evaluated with fewer than

\tfrac{1}{2}n-2

multiplications and

n-4

additions. At least this bound holds if only operations of those types are allowed, giving rise to a so-called "polynomial chain of length

<n^2/\log n

".

The polynomial given by Strassen has very large coefficients, but by probabilistic methods one can show that there must exist polynomials, even ones with coefficients restricted to 0s and 1s, whose evaluation requires at least

\Omega(n/\log n)

multiplications.

For other simple polynomials, the complexity is unknown. The polynomial

(x+1)(x+2)\cdots(x+n)

is conjectured not to be computable in time

(\log n)^c

for any

c

. This is supported by the fact that, if it could be computed fast, then integer factorization could be done in polynomial time, breaking the RSA cryptosystem.[9]

Matrix polynomials

Sometimes the computational cost of scalar multiplications (like

ax

) is less than the computational cost of "non-scalar" multiplications (like

x^2

). The typical example of this is matrices. If

M

is an

m\times m

matrix, a scalar multiplication

aM

takes about

m^2

arithmetic operations, while computing

M^2

takes about

m^3

(or

m^{2.3}

using fast matrix multiplication).

Matrix polynomials are important, for example, for computing the matrix exponential.

Paterson and Stockmeyer[10] showed how to compute a degree

n

polynomial using only

O(\sqrt{n})

non-scalar multiplications and

O(n)

scalar multiplications. Thus a matrix polynomial of degree n can be evaluated in

O(m^{2.3}\sqrt{n}+m^2n)

time. If

m=n

this is

O(m^3)

, as fast as one matrix multiplication with the standard algorithm.

This method works as follows: For a polynomial

P(M)=a_{n-1}M^{n-1}+\cdots+a_1M+a_0I,

let k be the least integer not smaller than

\sqrt{n}.

The powers

M,M^2,\ldots,M^k

are computed with

k

matrix multiplications, and

M^{2k},M^{3k},\ldots,M^{k^2-k}
are then computed by repeated multiplication by

M^k.

Now,

\begin{align} P(M) ={} &(a_0I+a_1M+\cdots+a_{k-1}M^{k-1}) \\ +{}&(a_kI+a_{k+1}M+\cdots+a_{2k-1}M^{k-1})M^k \\ +{}&\cdots \\ +{}&(a_{n-k}I+a_{n-k+1}M+\cdots+a_{n-1}M^{k-1})M^{k^2-k}, \end{align}

where

a_i=0

for i > n-1. This requires just

k

more non-scalar multiplications.

We can write this succinctly using the Kronecker product:

P(M)= \begin{bmatrix}I\\M\\\vdots\\M^{k-1}\end{bmatrix}^T \left(\begin{bmatrix} a_0&a_1&a_2&\cdots\\ a_k&a_{k+1}&\ddots\\ a_{2k}&\ddots\\ \vdots \end{bmatrix}\otimes I\right) \begin{bmatrix}I\\M^k\\M^{2k}\\\vdots\end{bmatrix}

.

The direct application of this method uses

2\sqrt{n}

non-scalar multiplications, but by combining it with the evaluation-with-preprocessing technique above, Paterson and Stockmeyer show this can be reduced to

\sqrt{2n}

.
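A sketch of the basic method in its direct form (our own illustration using NumPy; it performs about 2\sqrt{n} matrix–matrix multiplications and does not include the extra preprocessing savings):

    import numpy as np

    def paterson_stockmeyer(coeffs, M):
        # Evaluate coeffs[0]*I + coeffs[1]*M + ... + coeffs[n-1]*M^(n-1).
        n = len(coeffs)
        k = int(np.ceil(np.sqrt(n)))
        I = np.eye(M.shape[0])
        powers = [I]                      # baby steps: I, M, ..., M^(k-1)
        for _ in range(k - 1):
            powers.append(powers[-1] @ M)
        Mk = powers[-1] @ M               # M^k, used for the giant steps
        result = np.zeros_like(I)
        # Horner's rule in M^k over blocks of k coefficients each.
        for block in range((n + k - 1) // k - 1, -1, -1):
            digit = sum(coeffs[block * k + j] * powers[j]
                        for j in range(k) if block * k + j < n)
            result = result @ Mk + digit
        return result

    # Example: P(M) = I + 2M + 3M^2 for a small matrix.
    M = np.array([[1.0, 1.0], [0.0, 1.0]])
    assert np.allclose(paterson_stockmeyer([1.0, 2.0, 3.0], M),
                       np.eye(2) + 2 * M + 3 * (M @ M))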

Methods based on matrix polynomial multiplications and additions have been proposed that save non-scalar matrix multiplications compared to the Paterson–Stockmeyer method.[11]


Notes and References

  1. Carnicer, J.; Gasca, M. (1990). "Evaluation of Multivariate Polynomials and Their Derivatives". Mathematics of Computation. 54 (189): 231–243. doi:10.2307/2008692.
  2. Motzkin, T. S. (1955). "Evaluation of polynomials and evaluation of rational functions". Bulletin of the American Mathematical Society. 61: 163.
  3. Von zur Gathen, Joachim; Gerhard, Jürgen (2013). Modern Computer Algebra. Cambridge University Press. Chapter 10. ISBN 9781139856065.
  4. Knuth, Donald (2005). The Art of Computer Programming. Vol. 2: Seminumerical Algorithms. ISBN 9780201853926.
  5. Kedlaya, Kiran S.; Umans, Christopher (2011). "Fast Polynomial Factorization and Modular Composition". SIAM Journal on Computing. 40 (6): 1767–1802. doi:10.1137/08073408x.
  6. Larsen, K. G. (2012). "Higher Cell Probe Lower Bounds for Evaluating Polynomials". 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science. IEEE. pp. 293–301. doi:10.1109/FOCS.2012.21. ISBN 978-0-7695-4874-6.
  7. Downey, Peter; Leong, Benton; Sethi, Ravi (1981). "Computing Sequences with Addition Chains". SIAM Journal on Computing. 10 (3).
  8. Strassen, Volker (1974). "Polynomials with Rational Coefficients Which are Hard to Compute". SIAM Journal on Computing. 3 (2): 128–149. doi:10.1137/0203010.
  9. Chen, Xi; Kayal, Neeraj; Wigderson, Avi (2011). Partial Derivatives in Arithmetic Complexity and Beyond. Now Publishers.
  10. Paterson, Michael S.; Stockmeyer, Larry J. (1973). "On the Number of Nonscalar Multiplications Necessary to Evaluate Polynomials". SIAM Journal on Computing. 2 (1): 60–66. doi:10.1137/0202007.
  11. Fasi, Massimiliano (2019). "Optimality of the Paterson–Stockmeyer method for evaluating matrix polynomials and rational matrix functions". Linear Algebra and its Applications. 574: 185. doi:10.1016/j.laa.2019.04.001.