From charlesreid1

The fact that the polynomial is an approximation does not necessarily detract from its usefulness because all models are approximations. Essentially, all models are wrong, but some are useful.

- Box and Draper, Empirical Model-Building and Response Surfaces


Polynomials are one of the foundational objects in mathematics, and a keystone of algebra. Polynomials tie together many important concepts, and act as a bridge between the concrete numbers and abstract algebra. It's a great teaching tool, because nearly everybody will know what a polynomial is (or can be taught in 5 minutes), and yet you can spend a lifetime digging into their many useful properties and applications.

Polynomial Interpretation of Radix Representations

All numbers are just polynomials!

n = 125

n = 1 \times 100 + 2 \times 10 + 5 \times 1

n = 1 \times 10^2 + 2 \times 10^1 + 5 \times 10^0

n = x^2 + 2 x + 5 \quad (x = 10)
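The digit/radix correspondence above can be sketched in a few lines of Python (a hedged illustration; `digits_to_number` is a made-up helper name):

```python
# Interpreting the digits of a number as polynomial coefficients:
# 125 in base 10 is the polynomial x^2 + 2x + 5 evaluated at x = 10.

def digits_to_number(digits, base=10):
    """Evaluate the 'digit polynomial' at x = base (most significant digit first)."""
    n = 0
    for d in digits:
        n = n * base + d  # Horner-style accumulation
    return n

print(digits_to_number([1, 2, 5]))        # 125 in base 10
print(digits_to_number([1, 1, 1, 1], 2))  # binary 1111 = 15
```

Changing the `base` argument is all it takes to move between radix representations of the same number.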

Polynomials/Numbers - Using polynomials, we can generalize the concept of a radix, understand how we express numbers, and even think about alternative representations of numbers. This page describes some of those ideas.


Algebraic definition of a polynomial:

A(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_{n-1} x^{n-1}


We can represent a polynomial in a couple of ways, each useful in various situations. Keep in mind, the ability or inability to convert between different representations is crucial to algorithm design.

Three ways:

  • Coefficient vector  \langle a_0, a_1, a_2, \dots, a_{n-1} \rangle
  • Roots/sequence of roots \langle r_0, r_1, r_2, \dots, r_{n-1} \rangle
  • Samples (x_k, y_k) \quad \rightarrow \quad A(x_k) = y_k
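A small sketch of the same polynomial, x^2 - 3x + 2 = (x - 1)(x - 2), in all three representations (the helper names here are illustrative, not from any particular library):

```python
# The polynomial x^2 - 3x + 2 = (x - 1)(x - 2) in three representations.

coeffs = [2, -3, 1]  # coefficient vector <a_0, a_1, a_2>
roots = [1, 2]       # vector of roots

def eval_coeffs(coeffs, x):
    """Evaluate from the coefficient representation."""
    return sum(a * x**k for k, a in enumerate(coeffs))

def eval_roots(roots, x, leading=1):
    """Evaluate from the roots representation: leading * (x - r_0)(x - r_1)..."""
    p = leading
    for r in roots:
        p *= (x - r)
    return p

# Sample representation: (x_k, y_k) pairs, generated here from the coefficients.
samples = [(x, eval_coeffs(coeffs, x)) for x in range(3)]
print(samples)  # [(0, 2), (1, 0), (2, 0)]

# All three representations describe the same polynomial:
assert all(eval_roots(roots, x) == y for x, y in samples)
```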

The Fundamental Theorem of Algebra [1] tells us that any polynomial of degree n has exactly n complex roots, counted with multiplicity. The FTA was proven by Gauss; the uniqueness of the representation was proven by Legendre.

The problem with the roots representation is that there is no general method to go from coefficients to roots - we have formulas for the special cases (degree 2, 3, and 4), but no general formula in radicals exists for degree 5 or higher (a result of the Abel-Ruffini Theorem [2]).

We will cover, below, the relationship between operations and representations.


There are a couple of polynomial operations that will be useful:

  • Evaluation of polynomials
  • Addition of two polynomials
  • Multiplication of two polynomials

One of the challenges with implementing fast and efficient algorithms for these operations is that each operation has an ideal choice of polynomial representation - and it's different for each operation.

For example, to add two polynomials in linear time, the coefficient vector representation is the easiest - then addition can be performed by adding corresponding vector elements. But to multiply two polynomials in linear time, the root representation (representing a polynomial using a vector of roots) is the fastest way - then multiplication is as simple as concatenating the roots.


It's easy to come up with an inefficient and slow algorithm for evaluation - evaluate the polynomial exactly as it is written:

A(x) = a_0 + a_1 x + a_2 x^2 + \dots

However, this procedure is inefficient and redundant: computing each power x^k from scratch takes O(k) multiplications, for O(N^2) work overall.

A better way to evaluate a polynomial given its coefficients is to use Horner's Rule:

A(x) = a_0 + x(a_1 + x(a_2 + x( \dots )))

This is an O(N) algorithm.
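The two evaluation strategies can be compared side by side (a minimal sketch; the function names are made up):

```python
def eval_naive(coeffs, x):
    """Naive evaluation: each power of x is computed from scratch, O(N^2) multiplications."""
    return sum(a * x**k for k, a in enumerate(coeffs))

def eval_horner(coeffs, x):
    """Horner's rule: one multiply and one add per coefficient, O(N) total."""
    result = 0
    for a in reversed(coeffs):  # start from the highest-degree coefficient
        result = result * x + a
    return result

# Both agree on A(x) = 5 + 2x + x^2 at x = 10:
print(eval_naive([5, 2, 1], 10), eval_horner([5, 2, 1], 10))  # 125 125
```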


To add two polynomials, C(x) = A(x) + B(x), simply add together their coefficients. In this case, the coefficient representation is the best to use, as the corresponding elements of their vectors can be added in linear time for an overall O(N) addition algorithm.
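A minimal sketch of coefficient-vector addition, padding the shorter vector with zeros:

```python
from itertools import zip_longest

def poly_add(a, b):
    """Add two coefficient vectors element-wise; pad the shorter with zeros. O(N)."""
    return [x + y for x, y in zip_longest(a, b, fillvalue=0)]

# (1 + 2x) + (3 + 4x + 5x^2) = 4 + 6x + 5x^2
print(poly_add([1, 2], [3, 4, 5]))  # [4, 6, 5]
```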


Multiplication is an extremely useful operation - principally because it is equivalent to the convolution operation.

Using the coefficient representation, or the "classic" polynomial representation, this gets rather complicated rather quickly. There is not an easy way to do this because we have lots of interacting terms. (One constant term, but two linear terms; three quadratic terms; four cubic terms; etc.)

General form: the kth term's coefficient is the sum over all pairs of coefficients whose indices sum to k:

c_k = \sum_{j=0}^k a_j b_{k-j}

Total time to do this? Each term takes linear time (this grows like a nested for loop), so the total time is O(N^2). But this is "vanilla" polynomial multiplication.
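The convolution formula above translates directly into a nested loop (a sketch of the O(N^2) "vanilla" algorithm described here):

```python
def poly_mult(a, b):
    """Multiply two coefficient vectors by convolution: c_k = sum_j a_j * b_{k-j}. O(N^2)."""
    c = [0] * (len(a) + len(b) - 1)
    for j, aj in enumerate(a):
        for k, bk in enumerate(b):
            c[j + k] += aj * bk  # indices j and k sum to the output index j + k
    return c

# (1 + x)(1 + x) = 1 + 2x + x^2
print(poly_mult([1, 1], [1, 1]))  # [1, 2, 1]
```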

If we had the roots of each polynomial, we could do this in O(1) time by just concatenating the two root lists. That's tough to beat - except that recovering the roots from the coefficients is impossible in general for degree 5 and higher - which makes this technique more of a math trivia question.
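For completeness, here's what multiplication looks like in the roots representation, with the leading coefficient carried separately (an illustrative sketch; the tuple convention is made up):

```python
# In the roots representation, multiplication is just list concatenation,
# plus multiplying the two leading coefficients.

def mult_roots(p, q):
    """Each polynomial is a pair (leading_coefficient, list_of_roots)."""
    (pc, pr), (qc, qr) = p, q
    return (pc * qc, pr + qr)  # conceptually O(1); O(N) only to copy the lists

# (x - 1)(x - 2) times (x - 3) gives roots {1, 2, 3}:
print(mult_roots((1, [1, 2]), (1, [3])))  # (1, [1, 2, 3])
```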

However, with some care, multiplication can be done in O(N \log N) time.

Divide and Conquer/Polynomial Multiplication

Teaching Computer Science with Polynomials

Teaching Inheritance with Polynomials

Notes on teaching concept of inheritance with polynomials: Polynomials/Inheritance

Notes on teaching concept of interface with polynomials: Polynomials/Interface

Polynomials are an interesting test case for inheritance and interfaces:

  • Implementing the general (nth degree polynomials) and specific (quadratic/cubic polynomials)
  • We learn polynomials in reverse order: from specific to general; but we design code from general to specific.
  • Inheritance: parent-child relationship, what the parent has, the child also has, share DNA, can never "escape" that line of heritage
  • Interface: club relationship, just pay the dues and meet the requirements and you're in, can go where you like and be who you please, association is ad-hoc

Polynomial Class Testing

Main page: Polynomials/Test Driven Development

You could potentially teach a whole course on test-driven development, and a polynomial class is a good, simple problem for illustrating some of the principles and motivations behind TDD.

  • Mechanics of how to write a unit test, how to run a unit test
  • The philosophical how: how do you decide what a unit test should check?
  • Give some examples of numerical recipes style tests - not just one-and-done

What kind of corner cases do you want to try (get fuzzy):

  • Very big numbers, very small numbers, very weird numbers
  • Hex numbers? Binary numbers? Integers? Nulls?

(Like a very simple fuzzing procedure.)
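A sketch of what such tests might look like with Python's unittest module, using a stand-in evaluate function in place of a real Polynomial class (all of these names are hypothetical, not from the course code):

```python
import unittest

def evaluate(coeffs, x):
    """Stand-in for a Polynomial class's evaluation method (Horner's rule)."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

class TestPolynomialEvaluate(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(evaluate([5, 2, 1], 10), 125)

    def test_corner_cases(self):
        self.assertEqual(evaluate([], 3), 0)                 # empty polynomial
        self.assertEqual(evaluate([7], 10**6), 7)            # constant, very big x
        self.assertAlmostEqual(evaluate([1, 1], 0.5), 1.5)   # non-integer input

# Run the suite programmatically (rather than via unittest.main()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPolynomialEvaluate)
result = unittest.TextTestRunner().run(suite)
```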

Polynomial Class Timing

Main page: Polynomials/Timing

Two steps:

  • Keep this simple: stick a timer on the front, stick a timer on the end, subtract the two, and there's your time.
  • Make it a little more fancy: create your own CSV file, so that you can run it once, open it in Excel, and plot the results.
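Both steps can be sketched in a few lines (Python shown here for brevity; the same pattern applies in Java - the file name and problem sizes are arbitrary):

```python
import csv
import random
import time

def eval_horner(coeffs, x):
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# Timer on the front, timer on the end, subtract the two;
# write one CSV row per problem size so the results can be plotted later.
with open("timing.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["degree", "seconds"])
    for n in (100, 1000, 10000):
        coeffs = [random.random() for _ in range(n + 1)]
        start = time.perf_counter()
        eval_horner(coeffs, 1.0001)
        elapsed = time.perf_counter() - start
        writer.writerow([n, elapsed])
```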

Take-home lesson:

  • Don't let the problem end at the Java code - keep going, use other tools, do more analysis, do more meta-analysis.

Numerical Recipes with Polynomials

Polynomials are great, universally-accessible math objects. They're also commonly used in scientific computing, data analysis and statistics, and in mathematical models, which may need to perform millions or billions of polynomial calculations.

Polynomials/Numerical Recipes - notes from Press et al, Numerical Recipes, and the section on polynomials, rational functions, quadratics, and cubics.


To fit an Nth degree polynomial through N+1 points, we can use linear algebra.

Here's how it works: an Nth degree polynomial has N+1 coefficients.

Rather than treating x as the unknown, treat a, b, c, d, etc. as the unknowns.

Evaluate the polynomial at one of the known points; this yields an LHS and an RHS - one linear equation in the unknown coefficients, e.g.:

85 = 2 a + 4 b + 8 c + d

Now do that with all N+1 points, and you get N+1 equations, for the N+1 unknowns. Solve that matrix equation to get the coefficients of your Nth degree polynomial.
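The whole procedure can be sketched with exact arithmetic (a hedged illustration using Gauss-Jordan elimination on the Vandermonde system; `fit_polynomial` is a made-up name):

```python
from fractions import Fraction

def fit_polynomial(points):
    """Fit a degree-N polynomial through N+1 points by solving the
    Vandermonde system with Gauss-Jordan elimination (exact, via Fractions)."""
    n = len(points)
    # Augmented row k: [1, x_k, x_k^2, ..., x_k^(n-1) | y_k]
    A = [[Fraction(x)**j for j in range(n)] + [Fraction(y)] for x, y in points]
    for col in range(n):
        # Find a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    # The matrix is now diagonal; read off the coefficients.
    return [A[k][n] / A[k][k] for k in range(n)]

# Points sampled from x^2 + 2x + 5 recover its coefficients <5, 2, 1>:
print(fit_polynomial([(0, 5), (1, 8), (2, 13)]))  # [Fraction(5, 1), Fraction(2, 1), Fraction(1, 1)]
```

Fractions keep the arithmetic exact for this small teaching example; for real data a floating-point solver would be the usual choice.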