Function Approximation

Key Takeaways
  • Function approximation replaces complex or unknown functions with simpler ones, like polynomials or sinusoids, built from a set of basis functions.
  • The quality of an approximation is measured using "norms," such as the $L^2$ norm for average error and the $L^\infty$ norm for worst-case error.
  • The Weierstrass Approximation Theorem guarantees that any continuous function on a closed interval can be approximated to any degree of accuracy by a polynomial.
  • This principle is a cornerstone of modern science and engineering, enabling applications in signal processing (Fourier series), structural analysis (FEM), and quantum chemistry (DFT).

Introduction

In science and mathematics, we often face functions that are impossibly complex or known only through a set of data points. The challenge of taming this complexity—of capturing the essence of a complicated relationship with a simpler, more manageable form—is the central problem addressed by the art and science of function approximation. This powerful concept allows us to create a "stand-in" function that is close enough to the real thing for practical purposes, but how do we build these proxies, and how can we be confident in their fidelity?

This article provides a comprehensive overview of this fundamental topic. In the first chapter, "Principles and Mechanisms," we will delve into the core theory, exploring how approximations are built from basis functions, how their error is measured, and the profound theorems that guarantee their power. We then move on to witness these concepts in action in the second chapter, "Applications and Interdisciplinary Connections," where we will see how function approximation serves as a foundational tool across a vast range of fields, from engineering and signal processing to quantum chemistry and pure mathematics.

Principles and Mechanisms

In our journey to understand the world, we are often faced with functions of bewildering complexity. The trajectory of a spacecraft, the fluctuations of the stock market, the pressure wave of a supernova—these are not always described by simple, neat formulas. Often, we don't even have a formula, just a collection of measurements. How can we tame this complexity? How can we capture the essence of a complicated function with something simpler that we can actually work with? The answer lies in the beautiful and powerful art of function approximation. It is the science of creating a "stand-in" or an "impostor" function that is close enough to the real thing for our purposes. But what does "close enough" mean, and how do we build these impostors?

The Art of Deception: What is Approximation?

Let's start with the simplest possible idea. Imagine a smooth, curving line, say the graph of $f(x) = \sqrt{x}$ from $x = 0$ to $x = 1$. How could you describe this curve to someone who can only draw flat, horizontal line segments? You might chop the interval from 0 to 1 into a few pieces, and on each piece, you'd draw a horizontal line that somehow represents the curve in that region.

For instance, you could slice the domain into four equal parts: $[0, 1/4]$, $[1/4, 1/2]$, $[1/2, 3/4]$, and $[3/4, 1]$. On the first slice, $(0, 1/4)$, you could use the height of the curve at the left endpoint, $f(0) = 0$. On the second, $(1/4, 1/2)$, you could use the height $f(1/4) = 1/2$, and so on. What you have built is a step function, a crude staircase that mimics the original curve.

If you were to calculate the area under this staircase, you would be doing nothing other than calculating a Riemann sum, the very concept we use to define the definite integral in introductory calculus! This reveals a deep connection: the idea of integration is intrinsically linked to the idea of approximating a function with a simpler one. Of course, our four-step staircase is a poor imitation. But you can imagine that if we used a thousand, or a million, tiny steps, our approximation would become virtually indistinguishable from the true curve.
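This staircase construction is easy to try numerically. Below is a minimal sketch that builds the left-endpoint staircase for $f(x) = \sqrt{x}$ and compares the area under it to the exact integral $\int_0^1 \sqrt{x}\,dx = 2/3$; the step counts are just the ones from the discussion above:

```python
import numpy as np

def staircase_area(f, a, b, n):
    """Left-endpoint Riemann sum: replace f by n horizontal steps and
    return the area under the resulting staircase."""
    edges = np.linspace(a, b, n + 1)
    widths = np.diff(edges)
    heights = f(edges[:-1])          # height of the curve at each left endpoint
    return float(np.sum(widths * heights))

exact = 2.0 / 3.0                                     # integral of sqrt(x) on [0, 1]
coarse = staircase_area(np.sqrt, 0.0, 1.0, 4)         # the four-step staircase
fine = staircase_area(np.sqrt, 0.0, 1.0, 1_000_000)   # a million tiny steps
```

The four-step staircase misses the true area by roughly 0.15, while the million-step version is correct to several decimal places, exactly as the refinement argument predicts.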

Building A Stand-In: The Role of Basis Functions

Using little horizontal steps is a fine starting point, but it's a bit like trying to write a novel using only one word. We can create much richer and more efficient approximations by expanding our vocabulary of simple functions. The key idea is to build our approximation as a linear combination of a pre-selected set of basis functions.

Think of it like mixing colors. You start with a few primary colors (your basis functions), and by mixing them in different amounts (the coefficients), you can create an enormous spectrum of new colors (the approximating functions).

A common choice for a basis is the set of polynomials, with basis functions $\{1, x, x^2, x^3, \dots\}$. An approximation might look like $g(x) = c_0 + c_1 x + c_2 x^2$. In signal processing, one might try to model a decaying signal using a basis of decaying exponentials, like $\{\exp(-kt), \exp(-2kt), \exp(-3kt)\}$. The approximating function would then have the general form $g(t) = c_1\exp(-kt) + c_2\exp(-2kt) + c_3\exp(-3kt)$. The magic lies in finding the right coefficients $c_i$ to make our combination $g(t)$ look as much like the target function $f(t)$ as possible. The most famous example, of course, is the Fourier series, which uses sines and cosines as its basis to represent periodic functions, as if they were complex musical notes built from pure tones.
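To make the coefficient-finding step concrete, here is a small sketch using the exponential basis mentioned above. The target signal and the decay rate $k$ are made up for illustration, and least squares is one standard (not the only) way to choose the $c_i$:

```python
import numpy as np

# Hypothetical target signal (made up for illustration; not from the article):
k = 1.0
t = np.linspace(0.0, 5.0, 200)
f = np.exp(-1.3 * t)          # decays at a rate not in our basis

# Design matrix: one column per basis function exp(-m k t), m = 1, 2, 3
A = np.column_stack([np.exp(-m * k * t) for m in (1, 2, 3)])

# Least squares picks the coefficients c_i minimizing ||A c - f||^2
c, *_ = np.linalg.lstsq(A, f, rcond=None)
g = A @ c                     # the approximating linear combination
max_err = float(np.max(np.abs(f - g)))
```

Even though $e^{-1.3t}$ is not in the span of the three basis functions, the fitted combination tracks it closely across the whole interval.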

Two Ways of Seeing: Slicing the Domain vs. Slicing the Range

When we built our first staircase approximation, we did something very natural: we chopped up the domain (the x-axis) into small intervals. This is the heart of the Riemann integral. But there is another, profoundly different, way to think about a function, which leads to the more modern theory of Lebesgue integration.

Instead of slicing the domain, what if we slice the range (the y-axis)?

Let's take the simple parabola $f(x) = x^2$ on the interval $[0, 1]$. The Riemann approach is to split the domain $[0, 1]$ into, say, four vertical strips of equal width and approximate the function in each strip with a constant value, forming four rectangles. The Lebesgue approach is entirely different. It asks: where is the function's value between 0 and 0.2? Where is it between 0.2 and 0.4? And so on. We partition the range of values, and for each horizontal "value-band," we find the set of all $x$'s that produce a value in that band. The approximation is then built by taking a value from each band and applying it to the corresponding set of $x$'s.

Think of it this way: to calculate the total worth of the money in a cash register, the Riemann method would be to pick out each coin and bill one by one, note its value, and add it to the total. The Lebesgue method would be to first make separate piles for all the pennies, all the nickels, all the dimes, etc., and then count the number of coins in each pile and multiply by the pile's value. For simple functions, both methods give the same answer for the integral. But for very wild, "spiky" functions, the Lebesgue approach of grouping by value proves to be far more powerful and robust. This philosophical shift from "slicing the domain" to "slicing the range" is a cornerstone of modern analysis.
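The two slicings can be compared numerically. The sketch below approximates $\int_0^1 x^2\,dx = 1/3$ both ways on a fine grid; the five value-bands of width 0.2 follow the example above, and using the lower value of each band gives a (slightly low) lower Lebesgue sum:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1_000_001)   # fine grid standing in for [0, 1]
dx = xs[1] - xs[0]
vals = xs[:-1] ** 2                      # f(x) = x^2 sampled on the grid

# Method A (Riemann): slice the domain, sum height x width.
riemann = float(np.sum(vals * dx))

# Method B (Lebesgue): slice the range into bands; for each band, measure
# the set of x whose value lands in it, then sum (band value) x (measure).
bands = np.linspace(0.0, 1.0, 6)         # five value-bands of width 0.2
lebesgue = 0.0
for lo, hi in zip(bands[:-1], bands[1:]):
    measure = np.sum((vals >= lo) & (vals < hi)) * dx   # size of the preimage
    lebesgue += lo * measure                            # lower band value -> lower sum
```

Refining the value-bands (just as refining the domain strips) drives the Lebesgue sum toward the same value of $1/3$.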

The Measure of Success: How Good is 'Good Enough'?

So we have a target function $f(x)$ and an approximation $g(x)$. How do we score the quality of our imitation? We look at the error function, $e(x) = f(x) - g(x)$, and ask: how "big" is this function? There isn't one single answer; it depends on what we care about.

  • The $L^2$ Norm (Mean-Square Error): Perhaps the most common measure is the average of the squared error, $E = \frac{1}{b-a} \int_{a}^{b} [f(x) - g(x)]^2\,dx$. The squaring does two things: it makes all errors positive, and it penalizes large errors much more than small ones. Minimizing this error is often mathematically convenient. For example, if we want to approximate the simple function $f(x) = x$ on $[-\pi, \pi]$ by a single constant $c$, the $L^2$-optimal choice for $c$ turns out to be the average value of the function over the interval, which is 0.

This idea becomes truly powerful when we move beyond constants. Finding the best polynomial approximation in the $L^2$ sense is equivalent to a geometric projection. Imagine our complicated function $f(x)$ as a vector in an infinite-dimensional space. The set of all, say, linear polynomials $p(x) = a + bx$ forms a flat plane in this space. The best approximation is simply the "shadow" that $f(x)$ casts onto this plane. The mathematical tool for finding this shadow is the inner product, a generalization of the dot product for functions, which lets us talk about angles and orthogonality in function space.
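A quick numerical check of the constant-approximation example: the $L^2$-optimal constant for $f(x) = x$ on $[-\pi, \pi]$ is the projection of $f$ onto the constant function 1, i.e. its average value, which is 0. The grid-based inner products below are a sketch, not a symbolic proof:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 100_001)
dx = x[1] - x[0]
f = x                                    # the target function f(x) = x

def mse(c):
    """Mean-square error of the constant approximation g(x) = c."""
    return float(np.sum((f - c) ** 2) * dx / (2 * np.pi))

# Projection onto the constant function 1: c* = <f, 1> / <1, 1>, which is
# exactly the average value of f over the interval.
c_star = float(np.sum(f) * dx / (2 * np.pi))
```

Evaluating `mse` at `c_star` and at any other constant (say 0.5 or -0.5) confirms that the average value minimizes the mean-square error.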

  • The $L^\infty$ Norm (Uniform Error): Sometimes, we don't care about the average error. We care about the worst-case scenario. An engineer building a bridge wants to guarantee that the stress on a beam never exceeds a certain value. In this case, we want to minimize the maximum absolute error: $E = \sup_x |f(x) - g(x)|$.

Finding the best approximation in this "uniform" norm is a different game. Consider approximating the function $f(x) = |x|$ on $[-1, 1]$ with an even quadratic polynomial $p(x) = ax^2 + b$. The challenge is the sharp corner at $x = 0$. The solution is not intuitive, but it is beautiful. The best polynomial, it turns out, is $p(x) = x^2 + 1/8$. If you plot the error function, $|x| - (x^2 + 1/8)$, you'll find it has a remarkable property: it hits its maximum error $(+1/8)$ at $x = \pm 1/2$, and its minimum error $(-1/8)$ at three points: $x = -1$, $x = 0$, and $x = 1$. The error oscillates back and forth, touching the maximum deviation value multiple times with alternating signs. This is a general feature, described by the Chebyshev Equioscillation Theorem. It's as if to get the tightest possible fit, the error function must be stretched taut, waving perfectly between its upper and lower bounds.
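The equioscillation property of this example is easy to verify numerically:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200_001)
err = np.abs(x) - (x ** 2 + 0.125)   # error of the candidate best fit x^2 + 1/8

max_err = float(err.max())           # +1/8, attained at x = +/- 1/2
min_err = float(err.min())           # -1/8, attained at x = -1, 0, 1
```

The maximum and minimum of the error are exactly $+1/8$ and $-1/8$: the error sheet is "stretched taut" between its bounds, just as the equioscillation theorem demands.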

The Ultimate Guarantee: The Weierstrass and Stone-Weierstrass Theorems

This raises a grand question: if we are allowed to use a polynomial of high enough degree, can we approximate any continuous function to any desired level of accuracy?

The astonishing answer is yes. This is the content of the Weierstrass Approximation Theorem, a jewel of mathematical analysis. It guarantees that for any continuous function $f(x)$ on a closed interval, and for any tolerance $\epsilon > 0$, no matter how tiny, there exists a polynomial $p(x)$ such that $|f(x) - p(x)| < \epsilon$ for all $x$ in the interval. It's a license for success! It tells us that polynomials are a truly universal toolkit for approximation.
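One classical constructive route to the Weierstrass theorem (not spelled out above) uses Bernstein polynomials, which converge uniformly to any continuous function on $[0, 1]$. A sketch, with a kinked test function chosen purely for illustration:

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f on [0, 1] at x:
    B_n(f)(x) = sum_k f(k/n) C(n, k) x^k (1 - x)^(n - k)."""
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, int(j)) for j in k], dtype=float)
    xs = np.asarray(x, dtype=float)[..., None]
    return np.sum(f(k / n) * coeffs * xs ** k * (1 - xs) ** (n - k), axis=-1)

f = lambda x: np.abs(x - 0.5)        # continuous, but with a corner at 1/2
grid = np.linspace(0.0, 1.0, 1001)

err_10 = float(np.max(np.abs(bernstein(f, 10, grid) - f(grid))))
err_200 = float(np.max(np.abs(bernstein(f, 200, grid) - f(grid))))
```

The uniform error shrinks as the degree grows, even at the corner where the function is not differentiable; convergence is slow there, which is the price Bernstein polynomials pay for their robustness.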

But what's so special about polynomials? The more general Stone-Weierstrass Theorem provides the profound answer. It lays out the conditions a set of basis functions must satisfy to have this universal approximation property. Roughly speaking, the set of functions must form an algebra (meaning that if you multiply two functions in your toolkit, the result can also be approximated by your toolkit) and it must separate points (meaning for any two different points, there's a function in your toolkit that takes different values at them). For example, we can use this theorem to show that the set of odd polynomials (functions of the form $\sum c_k x^{2k+1}$) is dense in the space of all continuous odd functions on $[-1, 1]$. Our toolkit is perfectly suited to the class of functions we wish to approximate.

However, these theorems come with fine print. The properties of the approximating space are critical. If we try to approximate $f(x) = x$ on $[0, 1]$ using rational functions $R(x)$ that are constrained to have the same value at the endpoints, $R(0) = R(1)$, we run into a brick wall. No matter how complicated we make our rational function, the error can never be smaller than $1/2$. The simple constraint on the approximants imposes a fundamental limit on their power.

When Approximations Go Wrong: Singularities and The Specter of Divergence

The world of approximation is not all sunshine and guarantees. The fine print in the theorems matters. The Weierstrass theorem, for instance, applies to continuous functions on a closed and bounded interval. What happens if the function misbehaves?

Consider $f(x) = 1/\sqrt{x}$ on the interval $(0, 1]$. This function is perfectly fine everywhere except at $x = 0$, where it shoots off to infinity. Although the area under its curve is finite, the function itself is unbounded. If we try to approximate it with a function $g(x)$ that is continuous on the closed interval $[0, 1]$, our approximation $g(x)$ must have some finite value at $x = 0$. This immediately creates a massive, unbridgeable gap between $f(x)$ and $g(x)$ near zero, a gap that we can't eliminate no matter how clever our choice of $g$ is. The singularity fundamentally foils our attempts at a good uniform fit.

An even more subtle and profound "failure" occurs even for perfectly nice, continuous functions. The Weierstrass theorem guarantees that a good polynomial approximation exists, but it doesn't tell us how to find it. What about seemingly "obvious" constructive methods?

  1. Fourier Series: Break down a periodic function into its constituent sines and cosines.
  2. Trigonometric Interpolation: Find the unique trigonometric polynomial that passes exactly through a set of $2N+1$ equally spaced points on the function's graph.

Both methods seem like surefire ways to generate better and better approximations as we increase the number of terms or points. The shock is that for both methods, this is not true! It was a major discovery in mathematics that there exist perfectly continuous, well-behaved functions for which the Fourier series diverges at a point. Likewise, there are continuous functions for which the sequence of interpolating polynomials at equally spaced nodes, instead of converging, flies off to infinity at points between the nodes.

The reason is revealed by a powerful idea called the Uniform Boundedness Principle. Think of your approximation process (calculating the Nth partial sum, or the Nth interpolant) as an amplifier. You feed in the function $f$, and it outputs a number, the value of the approximation. The "gain" of this amplifier is its operator norm, a number called the Lebesgue constant. It turns out that for both Fourier series and interpolation at uniform nodes, this gain, while growing very slowly (like the natural logarithm, $\ln(N)$), nonetheless grows without bound as $N \to \infty$. And the principle states that if you have a sequence of amplifiers whose gain is unbounded, there must be some input signal (a continuous function) that will cause the output to blow up.
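The "gain" of the Fourier partial-sum operator can be estimated numerically. The sketch below integrates the absolute Dirichlet kernel to approximate the Lebesgue constant $L_N = \frac{1}{2\pi}\int_{-\pi}^{\pi} |D_N(t)|\,dt$, whose slow but unbounded logarithmic growth is the point of the argument above:

```python
import numpy as np

def fourier_lebesgue_constant(N, m=1_000_001):
    """Estimate L_N = (1/2pi) * integral of |D_N(t)| over [-pi, pi], where
    D_N(t) = sin((N + 1/2) t) / sin(t / 2) is the Dirichlet kernel.
    The kernel is even, so we integrate over (0, pi] and divide by pi."""
    t = np.linspace(1e-9, np.pi, m)          # skip the removable singularity at 0
    D = np.sin((N + 0.5) * t) / np.sin(t / 2)
    return float(np.sum(np.abs(D)) * (t[1] - t[0]) / np.pi)

L10 = fourier_lebesgue_constant(10)
L100 = fourier_lebesgue_constant(100)
```

Multiplying $N$ by ten raises the gain only from roughly 2.2 to roughly 3.1, consistent with the known $\frac{4}{\pi^2}\ln N$ asymptotics; yet because it never stops growing, the Uniform Boundedness Principle guarantees some continuous input that blows up.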

This is a beautiful, sobering lesson. It tells us that what appears to be the most natural path is not always the safest. It shows that the infinite is a tricky place, and our intuition can sometimes lead us astray. The quest for the perfect approximation is not just a practical tool; it is a deep journey into the structure of functions, the nature of infinity, and the subtle interplay between the continuous and the discrete.

Applications and Interdisciplinary Connections

After our journey through the principles and mechanisms of function approximation, you might be left with a feeling similar to having learned the rules of chess. You understand how the pieces move, the definitions of check and mate, but the real soul of the game—the strategy, the beauty, the surprising power of a well-placed pawn—is yet to be revealed. So it is with function approximation. Its abstract rules come alive only when we see them in action. In this chapter, we will embark on a tour across the vast landscape of science and engineering to witness how this single, elegant idea becomes a universal tool for discovery, creation, and understanding. We will see that replacing the complex with the simple is not just a convenience; it is one of the most profound and powerful strategies we have for making sense of the world.

The Engineered World: From Signals to Structures

Let's begin with the world we build around us. Much of modern technology, from the device you are reading this on to the aircraft that fly overhead, would be impossible without function approximation. It is the invisible scaffolding of the digital age.

Consider the simple act of listening to music or talking on the phone. The air pressure variations that make up a sound wave form an incredibly complex function of time. How does a device clean up static or unwanted noise? It does so by approximating the signal. One of the most beautiful ways to do this is through the lens of Jean-Baptiste Joseph Fourier, who taught us that any reasonably well-behaved periodic function can be represented as a sum of simple sine and cosine waves of different frequencies. The original, complicated signal is just a superposition of these pure tones. The approximation comes when we decide to use only a finite number of these tones—say, the first few hundred. In doing so, we are not just simplifying; we are performing an act of profound physical meaning. The low-frequency sine waves capture the slow, melodic parts of a sound, while the high-frequency waves represent the sharp, abrupt noises and hiss. By truncating the Fourier series, we are effectively throwing away the high-frequency components. This is precisely what a low-pass filter does. Approximating a function with its first $N$ Fourier terms is not just a mathematical exercise; it is filtering, a fundamental operation in all of signal processing, from audio engineering to image compression.
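Truncating a Fourier series really is a low-pass filter, as a small FFT experiment shows. The square-wave "signal" and the cutoff of 10 frequency bins are illustrative choices:

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
square = np.sign(np.sin(t))          # a "signal" full of sharp edges

def lowpass(signal, n_bins):
    """Zero out everything above the first n_bins frequencies of the FFT,
    i.e. truncate the Fourier series: an ideal low-pass filter."""
    spec = np.fft.rfft(signal)
    spec[n_bins:] = 0.0
    return np.fft.irfft(spec, n=len(signal))

smooth = lowpass(square, 10)         # keep only the 10 lowest frequency bins

mid_value = float(smooth[3 * len(t) // 4])   # sample at t = pi/2, where square = 1
```

The filtered wave hugs the flat parts of the square wave (the sampled value near $t = \pi/2$ is close to 1, up to Gibbs ripples) while the sharp edges, which live in the discarded high frequencies, are smoothed away.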

This idea of breaking things down into simpler components scales up from one-dimensional signals to three-dimensional objects. How does an engineer predict whether a bridge will stand or an airplane wing will fail under stress? The laws of elasticity that govern the behavior of materials are expressed as partial differential equations—equations that are notoriously difficult to solve for complex shapes. The Finite Element Method (FEM) is the engineer's brilliant answer, and it is function approximation through and through. The core idea is to break the complex object (the bridge, the wing) into a huge number of tiny, simple shapes, or "elements"—like triangles or tetrahedra. Within each tiny element, we approximate the continuous, complicated solution (the displacement or stress field) with a very simple function, typically a low-degree polynomial. For example, the stretching of a simple beam might be approximated by assuming the displacement varies linearly from one end of the element to the other.

But here we find a crucial lesson: the choice of the approximating function is not arbitrary. It must be constrained by the physics it aims to describe. For instance, the linear approximation for a beam element works because it is capable of exactly representing a state of constant strain, which is a fundamental physical possibility. A more complicated cubic or quadratic function might seem "better," but if it fails this basic physical fidelity check (known as the "patch test"), it is useless and will lead to incorrect results. The art of FEM is the art of choosing the right simple functions that, when stitched together, can faithfully capture the complex reality.

Once we commit to a computational approach like FEM, new questions of efficiency arise. Suppose we have a fixed budget—a certain number of nodes or elements we can afford to use. Where should we place them? If a function is mostly flat but has one region of very sharp change (a steep gradient), it seems wasteful to sprinkle our approximation points evenly. Common sense suggests we should concentrate our efforts where the action is. This is the principle of adaptive approximation. By placing more nodes in regions of high variation, we can achieve a much more accurate approximation for the same computational cost. This is analogous to a painter using fine, detailed brushstrokes for a person's face while using broad, sweeping strokes for the sky behind them.
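Here is a sketch of the adaptive idea: with the same budget of nine nodes, clustering them near a steep front beats spreading them evenly. The test function and the cubic clustering map are made-up illustrations:

```python
import numpy as np

f = lambda x: np.tanh(20 * (x - 0.5))     # flat, except a steep front near x = 0.5
fine = np.linspace(0.0, 1.0, 10_001)

def max_interp_error(nodes):
    """Worst-case error of the piecewise-linear interpolant through (nodes, f(nodes))."""
    return float(np.max(np.abs(np.interp(fine, nodes, f(nodes)) - f(fine))))

# Same budget of nine nodes, two placements:
uniform_nodes = np.linspace(0.0, 1.0, 9)
u = np.linspace(-1.0, 1.0, 9)
adaptive_nodes = 0.5 + 0.5 * u ** 3       # cubic map clusters nodes near the front

err_uniform = max_interp_error(uniform_nodes)
err_adaptive = max_interp_error(adaptive_nodes)
```

The uniformly spaced nodes miss the front badly, while the clustered nodes cut the worst-case error by roughly a factor of three at identical cost — the painter's fine brushstrokes, in code.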

This theme of "smart" computation extends to the process of solving the equations themselves. Often, a numerical simulation is an iterative process: we start with an initial guess for the solution and progressively refine it until it converges. Here again, approximation plays a starring role. If we start with a very naive initial guess (like "zero everywhere"), the iterative solver might take millions of steps. But if we can first cook up a simple, back-of-the-envelope analytic function that approximates the true solution reasonably well, we can use that as our starting point. This "informed" initial guess is already in the right ballpark, and the numerical solver will converge dramatically faster. It's the difference between starting a treasure hunt from a random location versus starting from a map that's roughly correct.

Modeling Nature: From the Quantum to the Cosmos

Function approximation is not just for building things; it is a primary tool for understanding the natural world itself. In fundamental science, we are often faced with theories whose exact equations are impossibly complex. Approximation is our only path forward.

There is perhaps no better example than in quantum chemistry. The properties of every molecule and material around us are determined by the behavior of their electrons, governed by the Schrödinger equation. Yet, solving this equation exactly is impossible for anything more complex than a hydrogen atom. Density Functional Theory (DFT) was a Nobel Prize-winning breakthrough that reformulated the problem: instead of the fantastically complex many-electron wavefunction, everything could, in principle, be determined from the much simpler electron density, $\rho(\mathbf{r})$. The catch? A key part of the theory, the exchange-correlation functional $E_{xc}[\rho]$, which accounts for the messy quantum interactions, is unknown. The entire enterprise of modern computational chemistry rests on finding good approximations for this "functional of all functionals."

This has led to a beautiful hierarchy of approximations, sometimes called "Jacob's Ladder." The simplest approximation, the Local Density Approximation (LDA), assumes the energy at a point $\mathbf{r}$ depends only on the density at that point, $\rho(\mathbf{r})$. To improve upon this, one climbs the ladder to the Generalized Gradient Approximation (GGA), which includes not just the density but also its local gradient, $\nabla\rho(\mathbf{r})$. This simple addition—making the approximation aware of how fast the density is changing nearby—dramatically improves accuracy and opened the door to reliable prediction of chemical properties. Higher rungs on the ladder include even more information, like the Laplacian of the density or the kinetic energy density. This is a perfect illustration of approximation as a systematic path towards greater truth.
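As a flavor of what "local" means on the bottom rung, the sketch below evaluates the classic Dirac/Slater LDA exchange formula $E_x \approx -C_x \int \rho^{4/3}\,d^3r$ on a toy Gaussian density (a made-up density, not a real atom); the energy density at each point depends only on $\rho$ at that point:

```python
import numpy as np

# Dirac/Slater LDA exchange: E_x ~ -C_x * integral of rho(r)^(4/3) d^3r.
# The energy density at each point depends only on the local density there.
C_x = (3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)

# Toy spherically symmetric density: a made-up Gaussian, not a real atom.
r = np.linspace(1e-6, 10.0, 100_001)
dr = r[1] - r[0]
rho = np.exp(-r ** 2)

# For spherical symmetry, integral of g(r) d^3r = 4 pi * integral of g(r) r^2 dr.
n_electrons = float(4 * np.pi * np.sum(rho * r ** 2) * dr)
E_x = float(-C_x * 4 * np.pi * np.sum(rho ** (4.0 / 3.0) * r ** 2) * dr)
```

A GGA functional would make the integrand depend on `np.gradient(rho, dr)` as well — one rung up the ladder, one more piece of local information.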

Of course, when we perform such a calculation, we are usually making several approximations at once: the choice of the functional (like GGA), the use of a finite mathematical basis to represent the electron orbitals, and the use of a finite grid for numerical integration. A mature scientific understanding requires us to be honest about our errors. Rigorous methods exist to disentangle these different sources of error, allowing computational scientists to put error bars on their predictions and systematically identify which part of their approximation needs the most improvement. This is approximation with a conscience.

Zooming out from the molecular scale to the cosmic, we find the same story. Astronomers studying the evolution of binary stars need to know the size of a star's "Roche lobe"—its gravitational zone of influence. There is no simple, exact formula for this teardrop-shaped region. However, a physicist named Peter Eggleton proposed a remarkably clever and simple analytic function that approximates the numerically calculated Roche lobe radius with high accuracy. He designed his function to have the correct mathematical behavior in the limiting cases of very small and very large mass ratios, and it fits the complex reality beautifully in between. This is function approximation as an act of creative modeling—distilling a complex numerical reality into a single, elegant, and immensely useful formula that can be used by other scientists.

The reach of these modeling ideas extends even into the biological and social sciences. How do complex cultural traits, like building a canoe or preparing a special food, persist in a population over many generations? We can build a mathematical model of this process. Such a model is, by its very nature, an approximation of a complex social reality. For instance, we might approximate the probability of a single person successfully learning a trait of complexity kkk as qkq^kqk, where qqq is the fidelity of learning one component. We can then build upon this with further approximations for the success of a group and the persistence over multiple generations. While vastly simplified, these models—these towers of approximation—allow us to ask "what if" questions and gain insight into the crucial factors (like population size or the fidelity of teaching) that allow complex culture to survive and flourish.
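A hedged sketch of such a tower of approximations: the $q^k$ learning probability comes from the text, while the group-success and multi-generation formulas below are illustrative simplifications, not the article's specific model:

```python
def p_individual(q, k):
    """Probability one learner masters all k components, each learned with fidelity q."""
    return q ** k

def p_persists(q, k, n_learners, generations):
    """Probability the trait survives a number of generations, assuming it
    persists in a generation iff at least one of n_learners succeeds.
    (The group and multi-generation steps are illustrative assumptions.)"""
    p_generation = 1.0 - (1.0 - p_individual(q, k)) ** n_learners
    return p_generation ** generations

# A 10-component trait with 90% per-component fidelity is fragile with one
# learner per generation, but robust with twenty:
lone = p_persists(0.9, 10, 1, 10)
group = p_persists(0.9, 10, 20, 10)
```

Even this crude model makes the qualitative point: population size and teaching fidelity enter multiplicatively, so small changes in either can flip a trait from near-certain loss to near-certain survival.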

The Frontiers: Unifying Ideas and Pure Thought

The story of function approximation is still being written, and its newest chapters are among the most exciting, connecting classic principles with cutting-edge tools and even probing the logical foundations of mathematics itself.

Many of the most challenging problems in modern science, from economics to climate modeling, involve functions of not just two or three, but hundreds or even thousands of variables. Here, traditional methods of approximation often fail, succumbing to the "curse of dimensionality." For decades, one of the most powerful tools for fighting this curse was the sparse grid method, a clever way of building a high-dimensional approximant without needing an exponential number of points. Today, the world is abuzz with deep neural networks, which have shown an uncanny ability to approximate high-dimensional functions. What is fascinating is that we are now discovering deep, underlying connections between these seemingly disparate fields. A neural network with ReLU activation functions can be viewed as a type of adaptive, piecewise linear approximator. Principles that were developed for classical methods, like exploiting additive structure in a function or focusing on the most important "dimensions" (a concept called dimension adaptivity), are finding direct analogues in the design of efficient neural network architectures. This is a beautiful example of the unity of scientific thought: a good idea is a good idea, whether it's expressed in the language of tensor products or in the layers of a neural network.

Finally, let us take one last step, into the realm of pure mathematics. Does approximation have a role to play when we are not trying to model a messy physical reality, but are seeking absolute, logical truth? The answer is a resounding yes. In geometric analysis, mathematicians study the properties of abstract shapes and spaces. A fundamental quantity called the Cheeger constant, $h(M)$, measures the "bottleneckedness" of a space. It is defined by seeking the worst-possible ratio of a region's surface area to its volume. The definition, in its full generality, requires searching over all possible measurable subsets—a universe of unimaginably complex and "jagged" regions. And yet, a cornerstone theorem of the field, the De Giorgi approximation theorem, proves that we get the exact same value for $h(M)$ if we restrict our search to only "nice" regions with smooth boundaries. Why? Because any jagged set can be approximated arbitrarily well by a sequence of smooth ones. This powerful result means that to understand this fundamental geometric invariant, we need only consider simple objects. Approximation here is not a tool of convenience; it is a foundational principle that lends stability and computability to the very definitions we work with.

From the practical hum of a digital filter to the abstract certainty of a geometric theorem, the principle of function approximation is a golden thread. It is the art of the possible, the engine of modern computation, and a deep and unifying language that allows us to reason about a world that is, in its full detail, forever beyond our grasp. It is the triumph of the simple over the complex.