
Error Formula for Polynomial Interpolation

Key Takeaways
  • The error in polynomial interpolation is determined by three factors: the function's intrinsic "wiggliness" (its higher-order derivative), the number of interpolation points, and the geometric arrangement of these points.
  • Using a high-degree polynomial with equally spaced points can lead to large, oscillating errors near the interval's endpoints, a problem known as the Runge phenomenon.
  • The Runge phenomenon can be overcome by strategically placing interpolation points at Chebyshev nodes, which are more densely clustered near the endpoints to minimize the maximum error.
  • The interpolation error formula is a fundamental tool for analyzing the accuracy of various numerical algorithms, including Newton-Cotes integration rules and Backward Differentiation Formulas for ODEs.

Introduction

Polynomial interpolation is a fundamental mathematical technique for approximating complex functions by constructing a simpler polynomial that passes through a set of known points. While this method provides an elegant way to "connect the dots," a crucial question remains: how accurate is the resulting approximation, and how can we quantify its error? Without a measure of error, our approximation is merely a guess, lacking the mathematical rigor required for scientific and engineering applications. This article addresses this knowledge gap by providing a deep dive into the polynomial interpolation error formula, a powerful tool that offers precise insights into the accuracy of our approximations.

Across the following chapters, we will embark on a journey to understand this pivotal formula. First, under "Principles and Mechanisms," we will dissect the formula's components, explore its elegant derivation using Rolle's Theorem, and uncover the critical lessons it teaches about choosing interpolation points, including the pitfalls of the Runge phenomenon and its solution via Chebyshev nodes. Following that, the "Applications and Interdisciplinary Connections" chapter will demonstrate the formula's vast utility, showing how it serves as the hidden engine behind numerical methods in finance, physics, and computer science, and ultimately defines the very limits of what we can accurately model.

Principles and Mechanisms

Imagine you want to describe a winding country road to a friend who has never seen it. You can't give them the exact path of every curve and dip, but you can give them a few key landmarks: "It passes by the old oak tree, then the red barn, and then crosses the stone bridge." Your friend can then connect these dots with straight lines or gentle curves to get a pretty good idea of the road's path. Polynomial interpolation is the mathematical version of this. We take a few known points on a function's graph and draw a smooth polynomial curve through them to approximate the whole function.

But how good is this approximation? Where is it best, and where is it worst? The beauty of mathematics is that we don't have to guess. We have a precise formula for the error, a kind of truth serum for our approximation. And like all great formulas in physics and mathematics, it's not just a collection of symbols; it’s a story.

The Anatomy of an Error

The error, which we'll call $E(x)$, is simply the true value of the function, $f(x)$, minus the value of our polynomial approximation, $P_n(x)$. For a polynomial of degree $n$ that interpolates a function at $n+1$ distinct points $x_0, x_1, \ldots, x_n$, the error at any point $x$ is given by a magnificent formula:

$$E(x) = f(x) - P_n(x) = \frac{f^{(n+1)}(\xi_x)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)$$

Let's not be intimidated by this. We can dissect it like a beautiful machine to understand its parts. It has three main components, each telling a different part of the story.

  1. The Function's "Wiggliness," $f^{(n+1)}(\xi_x)$: This term involves the $(n+1)$-th derivative of our original function $f$. The derivative measures the rate of change. A higher-order derivative measures the rate of change of the rate of change, and so on. You can think of it as a measure of the function's intrinsic "wiggliness" or complexity. A function that is almost a straight line has small derivatives, while a function that oscillates wildly has large derivatives. The mysterious $\xi_x$ (pronounced "ksee") is some point lying in the smallest interval that contains the interpolation nodes and the point $x$. We don't know its exact location, but its existence is guaranteed. This part of the formula tells us that it's easier to approximate smooth, well-behaved functions than jumpy, complicated ones.

  2. The "Tent Pole" Placement, $\omega(x) = \prod_{i=0}^{n} (x - x_i)$: This is the product of the differences between our point $x$ and each of the interpolation nodes $x_i$. Imagine the nodes are poles holding up a tent sheet (our polynomial). This term, often called the nodal polynomial, tells us how the error behaves based on the geometry of our chosen points. Right away, we can see something wonderful. If we choose our point $x$ to be one of the interpolation nodes, say $x_j$, then one of the factors in the product will be $(x_j - x_j) = 0$, making the entire product, and thus the entire error, zero! This is the algebraic guarantee that our polynomial approximation is perfectly accurate at the points we used to create it. It's a fundamental sanity check that our formula passes with flying colors.

  3. The Scaling Factor, $\frac{1}{(n+1)!}$: The factorial in the denominator, $(n+1)!$, grows incredibly fast ($2!, 3!, 4!, \ldots$ are $2, 6, 24, \ldots$). This term suggests that, all else being equal, using more points (a higher $n$) should make the error fantastically small. It's a powerful force pushing the error toward zero.

These three parts are in a constant tug-of-war. The function's wiggliness might be large, but we might counteract it with clever node placement or by simply using more points to make the factorial term huge. The entire game of interpolation is to manage this balance.
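In practice the unknown $\xi_x$ is handled by bounding $|f^{(n+1)}|$ over the interval, which turns the formula into the bound $|E(x)| \le \frac{\max |f^{(n+1)}|}{(n+1)!} |\omega(x)|$. As a hedged sketch (the function and nodes here are arbitrary choices, not anything prescribed by the article), we can check this bound numerically for $f(x) = \sin x$, all of whose derivatives are bounded by 1:

```python
import math

def lagrange(xs, ys, x):
    # Evaluate the interpolating polynomial through points (xs, ys) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def omega(xs, x):
    # Nodal polynomial: product of (x - x_i) over all nodes.
    p = 1.0
    for xi in xs:
        p *= x - xi
    return p

n = 3                                         # cubic interpolant, 4 nodes
xs = [i * math.pi / n for i in range(n + 1)]  # equispaced on [0, pi]
ys = [math.sin(x) for x in xs]

# Every derivative of sin is bounded by 1, so the error formula gives
# |E(x)| <= |omega(x)| / (n+1)!  -- check this on a fine grid.
grid = [k * math.pi / 200 for k in range(201)]
worst_violation = max(
    abs(math.sin(x) - lagrange(xs, ys, x))
    - abs(omega(xs, x)) / math.factorial(n + 1)
    for x in grid
)
assert worst_violation <= 1e-9
print("the error bound holds at every grid point")
```

Note how the bound is automatically zero at each node, where $\omega$ vanishes, exactly as the sanity check above demands.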

A Peek Under the Hood: The Magic of Rolle's Theorem

Where does this elegant formula come from? It's not pulled from a hat. It arises from a clever argument that is a beautiful example of mathematical reasoning. Let's sketch the idea for a moment, as it's too lovely to ignore.

Imagine we want to understand the error at some specific point $x$. We define a helper function, let's call it $g(t)$, like this:

$$g(t) = f(t) - P_n(t) - C \cdot \prod_{i=0}^{n} (t - x_i)$$

Here, $C$ is a constant we get to choose. Notice that $f(t) - P_n(t)$ is just the error $E(t)$, and the product is our nodal polynomial $\omega(t)$. By construction, $g(t)$ is zero at all the nodes $x_i$, because both $f(x_i) - P_n(x_i)$ and $\omega(x_i)$ are zero. Now for the trick: we choose the constant $C$ to make $g(t)$ also equal to zero at our special point of interest, $x$. We now have a function $g(t)$ that has at least $n+2$ roots!

Here comes the hero of the story: Rolle's Theorem. It states that if a smooth function has the same value at two points, its derivative must be zero somewhere between them. Since $g(t)$ has $n+2$ roots, we can apply Rolle's Theorem repeatedly. Its derivative, $g'(t)$, must have at least $n+1$ roots. The second derivative, $g''(t)$, must have at least $n$ roots, and so on. After applying it $n+1$ times, we find that the $(n+1)$-th derivative, $g^{(n+1)}(t)$, must be zero at some point, let's call it $\xi_x$.

Now let's take the $(n+1)$-th derivative of our definition of $g(t)$. The polynomial $P_n(t)$ has degree $n$, so its $(n+1)$-th derivative is zero. The nodal polynomial $\omega(t)$ is a polynomial of degree $n+1$ with leading term $t^{n+1}$, so its $(n+1)$-th derivative is just the constant $(n+1)!$. What we're left with is:

$$g^{(n+1)}(t) = f^{(n+1)}(t) - 0 - C \cdot (n+1)!$$

But we know that this must be zero at $t = \xi_x$. So, $0 = f^{(n+1)}(\xi_x) - C \cdot (n+1)!$. A quick rearrangement gives us the value of our magic constant $C$:

$$C = \frac{f^{(n+1)}(\xi_x)}{(n+1)!}$$

Remember how we defined $C$ in the first place? We chose it so that $g(x) = 0$, which means $f(x) - P_n(x) = C \cdot \omega(x)$. Substituting our expression for $C$, we arrive precisely at the error formula. It's a beautiful chain of logic, starting with a clever construction and ending with a powerful result.

The Shape of Error

The formula is not just for finding numbers; it gives us a qualitative feel for the error's behavior.

Consider approximating a function with a simple straight line connecting two points $(a, f(a))$ and $(b, f(b))$. This is linear interpolation ($n = 1$). The error formula becomes:

$$E(x) = \frac{f''(\xi_x)}{2} (x - a)(x - b)$$

For any point $x$ between $a$ and $b$, the factor $(x - a)$ is positive and $(x - b)$ is negative, so their product is always negative. This means the sign of the error is determined entirely by the sign of the second derivative, $f''$. The second derivative tells us about the function's concavity.

Let's take the function $f(x) = \sqrt{x}$. Its second derivative is $f''(x) = -\frac{1}{4} x^{-3/2}$, which is always negative for positive $x$. Therefore, the error $E(x) = \frac{(\text{negative})}{2} \cdot (\text{negative})$ is always positive. This means $f(x) - P_1(x) > 0$, or $f(x) > P_1(x)$: the true function is always above its linear approximation. This makes perfect geometric sense. The square root function is concave down, so any chord connecting two points on its graph will lie beneath the curve itself. The error formula captures this intuitive geometric fact perfectly.
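This sign prediction is easy to test numerically. A minimal sketch (the interval $[1, 4]$ is an arbitrary choice):

```python
import math

# Linear interpolation of f(x) = sqrt(x) between a = 1 and b = 4.
a, b = 1.0, 4.0

def chord(x):
    # The degree-1 interpolant (the chord) through (a, sqrt(a)) and (b, sqrt(b)).
    return math.sqrt(a) + (math.sqrt(b) - math.sqrt(a)) / (b - a) * (x - a)

# Because sqrt is concave down, E(x) = sqrt(x) - chord(x) should be positive
# at every interior point, exactly as the sign analysis predicts.
interior = [a + (b - a) * k / 100 for k in range(1, 100)]
assert all(math.sqrt(x) > chord(x) for x in interior)
print("the chord lies strictly below sqrt on (1, 4)")
```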

What if we try to interpolate a polynomial with another polynomial of lower degree? For instance, what if we interpolate the cubic function $f(x) = x^3$ with a quadratic polynomial $P_2(x)$? The error formula involves the third derivative, $f^{(3)}(x)$. But the third derivative of $x^3$ is just the constant 6. The mysterious $\xi_x$ disappears from the picture, because $f^{(3)}(\xi_x)$ is 6 no matter what $\xi_x$ is! The error is no longer an estimate; it's an exact expression:

$$E(x) = \frac{6}{3!} \prod_{i=0}^{2} (x - x_i) = \prod_{i=0}^{2} (x - x_i)$$

If we choose the nodes symmetrically at $-a, 0, a$, the error becomes $E(x) = (x + a)(x)(x - a) = x^3 - a^2 x$. We have found the exact error polynomial without even needing to find the interpolating polynomial $P_2(x)$ first! This demonstrates the sheer power of the error formula as an analytical tool.
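A quick numerical check of this claim (with the arbitrary choice $a = 2$): build the quadratic interpolant and compare the actual error against $x^3 - a^2 x$ at a few off-node points.

```python
# Interpolate f(x) = x^3 at the symmetric nodes -a, 0, a with a quadratic.
a = 2.0
nodes = [-a, 0.0, a]
vals = [t ** 3 for t in nodes]

def lagrange(xs, ys, x):
    # Evaluate the interpolating polynomial through (xs, ys) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The formula predicts the error exactly: E(x) = x^3 - a^2 x.
for x in [-3.0, -1.5, 0.7, 2.5]:
    actual = x ** 3 - lagrange(nodes, vals, x)
    predicted = x ** 3 - a ** 2 * x
    assert abs(actual - predicted) < 1e-9
print("the predicted error matches the actual error")
```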

The Perils of High Degrees: The Runge Phenomenon

The $\frac{1}{(n+1)!}$ term in our formula seems to promise that as we add more and more points, our approximation will get better and better. For many well-behaved functions, this is true. But a startling discovery by Carl Runge in 1901 showed that this intuition can be catastrophically wrong.

The problem lies in the nodal polynomial, $\omega(x) = \prod_{i=0}^{n} (x - x_i)$. If you choose your interpolation nodes to be equally spaced across an interval, say from $-1$ to $1$, this polynomial develops a nasty habit. While it stays relatively small in the middle of the interval, it grows to enormous values near the endpoints.

Imagine taking 11 equally spaced points on $[-1, 1]$. The value of $|\omega(x)|$ at a point halfway between two nodes at the center of the interval is tiny. But at a point halfway between the last two nodes near the endpoint, the value explodes. A direct calculation shows that $|\omega(x)|$ can be over 60 times larger near the endpoint than near the center! This means that even if the function's derivative term is well behaved, the error will be amplified enormously at the edges of the interval. Adding more equally spaced points just makes these "edge wiggles" worse. This is the infamous Runge phenomenon. It's a crucial lesson: blindly adding more data points with a simple-minded strategy can make your approximation diverge wildly from the truth.
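That factor of 60 is easy to reproduce. A sketch comparing $|\omega(x)|$ at a midpoint near the center against a midpoint near the edge, for 11 equispaced nodes:

```python
# 11 equally spaced nodes on [-1, 1]: -1, -0.8, ..., 0.8, 1.
nodes = [-1 + i / 5 for i in range(11)]

def omega_abs(x):
    # |omega(x)| = product of |x - x_i| over all nodes.
    p = 1.0
    for xi in nodes:
        p *= abs(x - xi)
    return p

center = omega_abs(0.1)  # halfway between the nodes 0.0 and 0.2
edge = omega_abs(0.9)    # halfway between the last two nodes 0.8 and 1.0
ratio = edge / center
assert ratio > 60
print(f"|omega| is about {ratio:.0f} times larger near the edge")
```

The ratio comes out to roughly 67, matching the "over 60 times" figure above.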

Taming the Beast: The Magic of Chebyshev Nodes

If equal spacing is the villain, who is the hero? The problem boils down to this: how can we choose the locations of our nodes $x_i$ on an interval to make the maximum value of $|\omega(x)|$ as small as possible?

The answer is one of the most elegant results in approximation theory, and it involves a special family of polynomials called ​​Chebyshev polynomials​​. The optimal nodes to choose are not equally spaced, but are instead the roots of a Chebyshev polynomial. These nodes are bunched up more densely near the endpoints of the interval and are more spread out in the middle.

This specific, non-uniform spacing is precisely what's needed to tame the wild growth of $\omega(x)$. By placing more "guard" nodes near the edges, we prevent the polynomial from soaring upward. The difference is not trivial. If we try to approximate $f(x) = x^4$ with a cubic polynomial, choosing the four Chebyshev nodes results in a maximum error that is nearly 40% smaller than the error from a set of seemingly reasonable, equally spaced points. This isn't just a minor improvement; it's a fundamental shift in strategy that turns a potentially unstable method into a powerful and reliable one.
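This comparison can be reproduced in a few lines. For $f(x) = x^4$ and a cubic interpolant, $f^{(4)} = 24 = 4!$, so the error is exactly the nodal polynomial $\omega(x)$, and comparing node sets reduces to comparing $\max |\omega|$ (a sketch; the grid resolution is an arbitrary choice):

```python
import math

def max_abs_omega(nodes, samples=2001):
    # Maximum of |omega(x)| over a fine grid on [-1, 1].
    best = 0.0
    for k in range(samples):
        x = -1 + 2 * k / (samples - 1)
        p = 1.0
        for xi in nodes:
            p *= abs(x - xi)
        best = max(best, p)
    return best

equi = [-1, -1 / 3, 1 / 3, 1]                                   # equally spaced
cheb = [math.cos((2 * i + 1) * math.pi / 8) for i in range(4)]  # roots of T_4

e_equi = max_abs_omega(equi)   # about 16/81, roughly 0.198
e_cheb = max_abs_omega(cheb)   # 1/8 = 0.125
assert e_cheb < 0.65 * e_equi  # nearly 40% smaller, as claimed
print(f"max error: equispaced {e_equi:.4f}, Chebyshev {e_cheb:.4f}")
```

The Chebyshev choice achieves $\max |\omega| = 2^{-3} = 0.125$, which is in fact the smallest value any four nodes on $[-1, 1]$ can achieve.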

A Final Flourish: Beyond Just Points

The framework we've built is surprisingly flexible. What if, at our data points, we know not only the function's value but also its slope (its first derivative)? This is called Hermite interpolation. Our powerful error formula adapts with grace. If we specify the function and its derivative at a point $a$, it's like having two nodes infinitesimally close to each other. In the nodal polynomial, this translates to a repeated factor. For Hermite interpolation at points $a$ and $b$, the nodal polynomial simply becomes $\omega(x) = (x - a)^2 (x - b)^2$. The rest of the formula's structure remains the same, with the derivative order and factorial raised to match the four conditions: the error involves $f^{(4)}(\xi_x) / 4!$. This shows that the core concept, the interplay between the function's smoothness and the geometry of the nodes, is a deep and unifying principle.
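As a sketch of this in action (the function $\sin$ and the interval $[0, 1.5]$ are arbitrary choices), a cubic Hermite interpolant matches value and slope at both endpoints, and its error obeys $|E(x)| \le \frac{\max |f^{(4)}|}{4!} (x - a)^2 (x - b)^2$:

```python
import math

# Cubic Hermite interpolation of sin on [a, b]: match value and derivative
# at both endpoints, then verify |E(x)| <= (x-a)^2 (x-b)^2 / 4!  (|sin''''| <= 1).
a, b = 0.0, 1.5
fa, fb = math.sin(a), math.sin(b)
da, db = math.cos(a), math.cos(b)

def hermite(x):
    # Standard cubic Hermite basis functions on [a, b].
    h = b - a
    t = (x - a) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * fa + h10 * h * da + h01 * fb + h11 * h * db

worst_violation = max(
    abs(math.sin(x) - hermite(x)) - (x - a) ** 2 * (x - b) ** 2 / 24
    for x in [a + (b - a) * k / 100 for k in range(101)]
)
assert worst_violation <= 1e-12
print("the Hermite error bound holds on [0, 1.5]")
```

Note the repeated factors $(x-a)^2 (x-b)^2$ in the bound: the error is flattened at both endpoints, reflecting that the interpolant matches the slope there, not just the value.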

From its elegant derivation to its practical power in predicting and controlling approximation errors, the polynomial interpolation error formula is a cornerstone of numerical science. It teaches us that approximation is not guesswork but a delicate art, guided by profound mathematical principles. It warns us of hidden dangers like the Runge phenomenon but also hands us the tools, like Chebyshev nodes, to overcome them. It is a perfect example of how a single mathematical expression can provide a window into the beautiful and complex relationship between the continuous and the discrete.

Applications and Interdisciplinary Connections

We have spent some time looking under the hood, examining the gears and levers of the polynomial interpolation error formula. It is a beautiful piece of mathematical machinery. But a machine is only as good as what it can do. Now, we are going to take this machine out for a spin. We will see that this is no mere theoretical curiosity; it is a master key that unlocks a deeper understanding of problems across finance, engineering, physics, and computer science. It is a tool for prediction, a blueprint for algorithms, and a sober guide to the very limits of what we can know.

The Art of Prediction and Estimation

At its heart, interpolation is about making an educated guess. We have some data points, and we want to know what happens in between. The error formula is our guide to how "educated" that guess really is.

Imagine you are a financial analyst trying to price a bond. You know the yield for a 5-year bond and a 10-year bond, but your client is interested in a 7-year bond. The simplest thing to do is to draw a straight line between the two known points and read off the value at year 7. This is linear interpolation. But how much faith should you have in this number? Is it a solid estimate or a wild guess? The error formula for a degree-1 polynomial gives us the answer. If we can make a reasonable assumption about the maximum curvature of the true yield curve—how much it bends—the formula provides a rigorous upper bound on our error. It transforms our guess into a statement of confidence: "the 7-year yield is approximately this, and I can guarantee it's no further than that from the true value". This ability to quantify uncertainty is the bedrock of modern finance and risk management.
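As a hedged sketch with made-up numbers (the yields and the curvature cap $M$ below are purely illustrative assumptions, not market data), the calculation looks like this:

```python
# Hypothetical yields: 3.1% at 5 years, 3.8% at 10 years.
a, ya = 5.0, 0.031
b, yb = 10.0, 0.038
x = 7.0

# Linear interpolation for the 7-year yield.
estimate = ya + (yb - ya) / (b - a) * (x - a)

# Error bound from the degree-1 formula: |E(x)| <= M/2 * |(x-a)(x-b)|,
# where M is an assumed cap on the curvature |f''| of the true yield curve.
M = 0.001
bound = M / 2 * abs((x - a) * (x - b))

print(f"7-year yield ~ {100 * estimate:.2f}% give or take {100 * bound:.2f} points")
```

With these numbers the estimate is 3.38% with a guaranteed error of at most 0.30 percentage points; tightening the assumption on $M$ tightens the guarantee.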

This idea of filling in the gaps is not just for numbers on a spreadsheet; it is how your computer can perform magic on images. Consider a digital photograph with a thin scratch across it, a single line of missing pixels. How can we "inpaint" this damage? For each column of the image, we can treat the pixels above and below the scratch as data points. We then fit a polynomial through these points and use it to calculate the color of the missing pixel. If the original image in that region was smooth (say, a picture of a clear blue sky), the underlying function is simple, much like a low-degree polynomial. In that happy case, our interpolation can perfectly restore the missing information. The error formula tells us that if the "true" function is a polynomial of degree $N$, and we use at least $N+1$ points to interpolate it, our error is zero. The magic is revealed to be mathematics.

The Hidden Engine of Numerical Methods

The true power of polynomial interpolation, however, goes far beyond just connecting known dots. It is the secret, humming engine inside many of the most powerful algorithms in computational science. The error formula, in turn, becomes the key to analyzing their performance.

Let’s first look at numerical integration. Suppose you want to find the area under a curve, $\int_a^b f(x)\,dx$. Many classic methods, like the trapezoidal rule or Simpson's rule, belong to a family called Newton-Cotes formulas. The secret of these methods is surprisingly simple: they approximate $f(x)$ with an interpolating polynomial $p_n(x)$ built on evenly spaced points, and then they integrate the polynomial exactly. So what is the error of the integration? By the linearity of integrals, it is simply the integral of the interpolation error:

$$\int_a^b f(x)\,dx - \int_a^b p_n(x)\,dx = \int_a^b \big( f(x) - p_n(x) \big)\,dx$$

This is a remarkable insight! The error of our quadrature method is directly and exactly the integral of the interpolation error. Understanding one completely illuminates the other.
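A sketch verifying this identity for the trapezoidal rule applied to $e^x$ on $[0, 1]$ (the interpolation-error integral is itself computed numerically on a fine grid, an arbitrary implementation choice):

```python
import math

a, b = 0.0, 1.0
f = math.exp

# Trapezoidal rule = exact integral of the linear interpolant of f on [a, b].
trap = (b - a) * (f(a) + f(b)) / 2
exact = math.e - 1.0                  # integral of e^x over [0, 1]
quad_error = exact - trap

# Integrate the interpolation error f - p1 with a fine midpoint rule.
def p1(x):
    return f(a) + (f(b) - f(a)) / (b - a) * (x - a)

n = 100_000
h = (b - a) / n
interp_err_integral = h * sum(
    f(a + (k + 0.5) * h) - p1(a + (k + 0.5) * h) for k in range(n)
)

# The two quantities agree, up to the fine grid's own tiny error.
assert abs(quad_error - interp_err_integral) < 1e-7
print("quadrature error equals the integrated interpolation error")
```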

The story continues with solving differential equations, the language of physics. Many numerical methods, such as the famous Backward Differentiation Formulas (BDF), trace the evolution of a system over time, say $y'(t) = f(t, y(t))$. To do this, they approximate the derivative $y'(t_{n+1})$ at the next time step. How? They first find a polynomial that passes through the previously computed solution points $(t_n, y_n), (t_{n-1}, y_{n-1}), \ldots$ and then compute the derivative of that polynomial. The error of the method, the so-called local truncation error, is then found by differentiating the polynomial interpolation error formula. Once again, the behavior of a sophisticated algorithm is governed by the properties of the underlying interpolation.

Even the search for solutions to equations like $f(x) = 0$ relies on this principle. The secant method, a fast and popular root-finding algorithm, generates its next guess by finding the root of the straight line interpolating the two previous points. Where does its famous speed come from? By applying the error formula for a degree-1 polynomial to the function $f(x)$ near its root, we can derive the exact relationship between successive errors, revealing the method's superlinear convergence rate.
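A sketch of that speed in action (the test equation $x^2 - 2 = 0$ is an arbitrary choice):

```python
import math

# Secant method on f(x) = x^2 - 2, whose positive root is sqrt(2).
f = lambda x: x * x - 2
root = math.sqrt(2)

x0, x1 = 1.0, 2.0
errors = [abs(x1 - root)]
for _ in range(6):
    # Next iterate = root of the line through (x0, f(x0)) and (x1, f(x1)).
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    errors.append(abs(x1 - root))

# Superlinear convergence: six steps take the error below 1e-10.
assert errors[-1] < 1e-10
print([f"{e:.2e}" for e in errors])
```

Printing the error sequence shows each error shrinking roughly like a power of the previous one, the signature of superlinear convergence.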

Taming the Beast in Practice: The Runge Phenomenon

With great power comes great danger. If we can approximate a function by simply adding more and more interpolation points, why not use a polynomial of degree 100? It seems intuitive that more points should lead to a better fit. But here, our intuition leads us astray into one of the most famous pitfalls of numerical analysis: the Runge phenomenon.

Imagine a rover navigating on a distant planet. It takes elevation samples at evenly spaced intervals and interpolates them with a high-degree polynomial to model the terrain ahead. If the true terrain contains a smooth hill, like the function $H(x) = \frac{1}{1 + 25x^2}$, the rover's interpolated model might develop wild oscillations near the ends of the sampling interval. These are not real features; they are mathematical artifacts. The rover's computer might "see" a phantom ravine or a phantom obstacle that isn't there, leading to a catastrophic decision. The polynomial, trying too hard to fit the data in the middle, goes crazy at the edges.

The error formula, $\frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)$, contains the secret to both the problem and its solution. The $\prod (x - x_i)$ term gets very large near the ends of the interval for equispaced points. The solution? Choose the interpolation points $x_i$ cleverly to make this product term as small as possible across the entire interval. This leads us to the heroic Chebyshev nodes.

Instead of being evenly spaced, Chebyshev nodes are bunched up near the endpoints of the interval. Think of trying to secure a bedsheet on a windy day. If you only put pins in the middle, the edges will flap wildly. To keep it flat, you need to put more pins near the edges. Chebyshev nodes do exactly this for our polynomial. They "pin down" the function at the edges, suppressing the wild oscillations of the Runge phenomenon.
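The pin-the-bedsheet effect can be seen directly. A sketch interpolating the Runge function with 11 nodes, once equispaced and once at Chebyshev nodes:

```python
import math

# The Runge function from the rover example.
f = lambda x: 1 / (1 + 25 * x * x)

def lagrange(xs, x):
    # Evaluate the polynomial interpolating f at nodes xs, at the point x.
    total = 0.0
    for i, xi in enumerate(xs):
        term = f(xi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def max_error(xs, samples=1001):
    grid = [-1 + 2 * k / (samples - 1) for k in range(samples)]
    return max(abs(f(x) - lagrange(xs, x)) for x in grid)

equi = [-1 + i / 5 for i in range(11)]                            # equispaced
cheb = [math.cos((2 * i + 1) * math.pi / 22) for i in range(11)]  # Chebyshev

e_equi, e_cheb = max_error(equi), max_error(cheb)
assert e_equi > 1.0 and e_cheb < 0.2   # wild edge oscillation vs. a tame fit
print(f"max error: equispaced {e_equi:.3f}, Chebyshev {e_cheb:.3f}")
```

With equispaced nodes the worst error near the edges exceeds the function's entire range; with Chebyshev nodes it drops by an order of magnitude, and it keeps shrinking as more Chebyshev nodes are added.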

This taming of the beast is not just an academic trick; it's a cornerstone of modern computational science. Many simulations, from climate modeling to quantum chemistry, are incredibly expensive to run. To explore a model's behavior, scientists can't afford to run it thousands of times. Instead, they run the expensive simulation a handful of times at carefully chosen Chebyshev nodes. They then construct a high-degree interpolating polynomial, a "surrogate model," which is extremely cheap to evaluate. Because the Chebyshev nodes keep the worst-case error small across the entire interval, this surrogate can be used for optimization, uncertainty quantification, and design, revolutionizing the pace of scientific discovery.

Knowing the Limits: When Interpolation Fails

So, are Chebyshev nodes a magic bullet? Can we approximate any function this way, as long as we use enough points? The answer is a resounding no, and it brings us to a final, profound lesson from the error formula.

The formula contains the term $f^{(n+1)}(\xi)$, the $(n+1)$-th derivative of our function. This term has been lurking quietly in the background, but its message is crucial: polynomial interpolation works best for functions that are infinitely smooth, with derivatives of all orders. What happens if we try to interpolate a function that is not smooth?

Consider the simplest non-smooth function: a step function, which makes a sudden jump from 0 to 1. A polynomial is the epitome of smoothness; it cannot make a sharp turn, let alone a vertical jump. If we try to fit a polynomial to a step function, no matter how high the degree, and no matter what nodes we choose, a beautiful piece of analysis shows that the uniform error can never be smaller than half the height of the jump. It is a fundamental mismatch of character. Trying to model a discontinuity with a polynomial is like trying to build a perfect staircase out of water; the inherent nature of your material makes the task impossible. This tells us that the applicability of our error formula, and of polynomial interpolation itself, is governed by the smoothness of the world we are trying to model.
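A sketch of that impossibility (interpolating a unit step at Chebyshev nodes of increasing degree and probing just either side of the jump):

```python
import math

# Unit step: jumps from 0 to 1 at x = 0.
step = lambda x: 1.0 if x >= 0 else 0.0

def lagrange(xs, x):
    # Evaluate the polynomial interpolating the step at nodes xs, at point x.
    total = 0.0
    for i, xi in enumerate(xs):
        term = step(xi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# No matter how many nodes we use, the polynomial is continuous at 0, so just
# left or just right of the jump it must miss the step by about half the jump.
for n in (6, 12, 24):
    nodes = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]
    worst = max(abs(step(x) - lagrange(nodes, x)) for x in (-1e-6, 1e-6))
    assert worst >= 0.499
print("the error near the jump never drops below about 0.5")
```

Raising the degree moves the oscillations around but cannot shrink this gap: a continuous curve simply cannot track a discontinuity.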

A Universal Language of Approximation

Our journey is complete. We have seen the error formula for polynomial interpolation at work in an astonishing variety of contexts. It gives us error bars in finance, repairs pixels in our photos, drives the algorithms that integrate functions and solve differential equations, explains the convergence of iterative methods, warns us of phantom mountains on Mars, and provides the foundation for the surrogate models that accelerate science. Finally, it teaches us its own limitations, reminding us of the crucial link between a model and the smoothness of the reality it seeks to describe.

This single formula is a thread that weaves together disparate fields, revealing a deep unity in the art and science of approximation. It is part of a universal language that helps us understand the intricate relationship between data, models, and the irreducible uncertainty that comes from trying to connect the dots.