
Empirical Interpolation Method

Key Takeaways
  • The Empirical Interpolation Method (EIM) overcomes the "non-affine barrier" in Reduced-Order Models by constructing an efficient affine approximation for complex, parameter-dependent functions.
  • EIM employs a greedy algorithm to iteratively select basis functions and "magic" interpolation points, which drastically reduces the computational cost during the online evaluation stage.
  • By restoring a fast offline-online decomposition, EIM enables rapid, real-time simulation and analysis across diverse fields, from geosciences to gravitational wave astronomy.
  • Using EIM introduces an approximation error that voids standard error certification, requiring additional steps to compute a rigorous, certified error bound for the combined model.

Introduction

In modern science and engineering, simulating complex physical phenomena often involves solving enormous systems of equations repeatedly for different parameters. This computational burden, known as the "tyranny of the large matrix," can grind progress to a halt. While Reduced-Order Models (ROMs) offer a solution by capturing a system's essence in a much smaller model, they traditionally rely on a special mathematical structure called affine parameter dependence. Many real-world problems, however, lack this structure, creating a significant computational barrier. This article introduces the Empirical Interpolation Method (EIM), a powerful and elegant technique designed specifically to overcome this non-affine challenge. We will first explore the core principles and mechanisms of EIM, detailing how it cleverly constructs an approximation to restore computational efficiency. Following this, we will journey through its diverse applications and interdisciplinary connections, revealing how EIM has become an indispensable tool in fields ranging from geology to the detection of gravitational waves.

Principles and Mechanisms

The Quest for Speed: The Tyranny of the Large Matrix

Imagine trying to understand the intricate dance of air flowing over a wing, the subtle vibrations in a bridge under stress, or the complex spread of heat through a new material. In the modern world, we tackle these monumental challenges not with pen and paper, but with the immense power of computers. We translate the beautiful, continuous laws of physics into discrete, algebraic problems that a machine can solve. This usually means solving a system of equations, which we can write abstractly as $A(\mu) u = f$, where $u$ is a vector representing our physical state (like temperature or displacement at millions of points), and $A(\mu)$ is an enormous matrix, perhaps millions by millions in size, that describes the physics of the system. The vector $\mu$ represents the parameters we might want to change—the speed of the airflow, the load on the bridge, the conductivity of the material.

Solving this system just once is a formidable task. But what if we are designing a new aircraft wing and need to test thousands of different shapes and airspeeds? What if we want to find the optimal placement of supports in our bridge? We would need to solve this gargantuan system over and over again for countless different values of $\mu$. The computational cost would be astronomical, grinding our progress to a halt. We are faced with the tyranny of the large matrix.

To escape this tyranny, we turn to a powerful idea: Reduced-Order Modeling (ROM). The insight is that even though the state vector $u$ lives in a space with millions of dimensions, the actual solutions for all the parameters we care about might lie on a much simpler, lower-dimensional surface within that vast space. Think of a symphony orchestra with a hundred musicians. While the possible combinations of sounds are nearly infinite, a particular symphony might be dominated by the interplay of just a few key sections—the strings, the brass, and the woodwinds. A ROM is like trying to capture the essence of the symphony by focusing only on these dominant "modes". We find a small set of "basis" vectors, say $r$ of them where $r$ is a few dozen instead of millions, that can effectively describe our solution. We then seek an approximate solution that is a combination of just these few basis vectors. This transforms our original $N \times N$ problem into a tiny, manageable $r \times r$ problem.

The key to making this practical is a strict separation of labor, a strategy known as offline-online decomposition. In the "offline" stage, we perform all the heavy, time-consuming computations that are independent of the parameter $\mu$. This is a one-time investment. Then, in the "online" stage, for any new parameter $\mu$ a designer wants to test, the calculation should be lightning-fast, and its cost must be completely independent of the original, massive dimension $N$.

The "Affine" Magic Trick

What grants us this incredible power of offline-online decomposition? The secret lies in a special mathematical structure called affine parameter dependence.

Let's use an analogy. Suppose you are selling a custom computer. The total price is the sum of the prices of its components: Price = (CPU price) + (RAM price) + (GPU price). The prices of the individual components might change based on market conditions (our parameter $\mu$), but the formula is a simple sum. If the price for a specific CPU is a function $\Theta_{\text{CPU}}(\mu)$, you can build a price calculator that looks like:
$$\text{Price}(\mu) = \Theta_{\text{CPU}}(\mu) \cdot (\text{1 CPU unit}) + \Theta_{\text{RAM}}(\mu) \cdot (\text{1 RAM unit}) + \dots$$
In the world of our matrix $A(\mu)$, an affine structure means it can be written as a similar sum:
$$A(\mu) = \sum_{q=1}^{Q} \Theta_q(\mu) A_q$$
Here, the $\Theta_q(\mu)$ are simple, scalar-valued functions of the parameter $\mu$, and the $A_q$ are constant, parameter-independent matrices. This structure is our magic trick. It allows us to perform the Galerkin projection—the mathematical process of reducing the system—on each constant piece $A_q$ just once, in the offline stage. We compute and store the small $r \times r$ matrices $A_{r,q} = V^T A_q V$, where $V$ is the matrix containing our reduced basis vectors.

Then, in the online stage, when we are given a new $\mu$, we simply evaluate the scalar functions $\Theta_q(\mu)$ and assemble the tiny reduced matrix with a quick sum: $A_r(\mu) = \sum_{q=1}^{Q} \Theta_q(\mu) A_{r,q}$. The cost of this online assembly scales with $Q$ and $r^2$, completely independent of the original behemoth dimension $N$. We've successfully broken the curse of dimensionality.
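This two-stage split is easy to see in code. The following sketch (Python/NumPy, with random matrices standing in for the $A_q$ and made-up scalar functions $\Theta_q(\mu)$; purely illustrative, not tied to any particular application) precomputes the projected blocks offline and assembles the tiny reduced matrix online:

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, Q = 2000, 10, 3                      # full dimension, reduced dimension, affine terms

# --- Offline stage (one-time cost, depends on N) ---
A_q = [rng.standard_normal((N, N)) for _ in range(Q)]  # parameter-independent pieces A_q
V, _ = np.linalg.qr(rng.standard_normal((N, r)))       # reduced basis, orthonormal columns
A_rq = [V.T @ A @ V for A in A_q]                      # stored r x r projected blocks

# --- Online stage (cost depends only on Q and r) ---
def assemble_reduced(mu):
    """Assemble A_r(mu) = sum_q Theta_q(mu) A_rq for illustrative Theta_q."""
    thetas = [1.0, mu, mu ** 2]                        # example scalar functions Theta_q
    return sum(th * Arq for th, Arq in zip(thetas, A_rq))

A_r = assemble_reduced(0.5)                            # tiny 10 x 10 system matrix
```

The online call touches only $Q$ scalars and $Q$ matrices of size $r \times r$; the dimension $N$ never appears after the offline stage.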

It's tempting to think that if the operator has this simple structure, the solution $u(\mu)$ must also be a simple function of $\mu$. This is a crucial mistake. The solution is found by inverting the matrix, $u(\mu) = A(\mu)^{-1} f(\mu)$, and matrix inversion is a profoundly nonlinear operation. It scrambles the simple affine structure into a highly complex, rational function of the parameters. The solution manifold $\{u(\mu)\}$ is a twisted, curved surface. It is precisely because of this complexity that we need reduced-order models in the first place.

When the Magic Fails: The Non-Affine Barrier

Nature, unfortunately, is not always so accommodating. In many real-world problems, the dependence on the parameter $\mu$ is inherently non-affine. Imagine the thermal conductivity of a material changing as a nonlinear function of temperature, or the shape of an object deforming in a complex way. The matrix $A(\mu)$ can no longer be broken down into a neat sum.

To use our computer analogy again, what if the price was a complicated, non-separable formula like $\text{Price}(\mu) = \log(\text{CPU price}(\mu) + \text{GPU price}(\mu)^2)$? There is no way to pre-calculate parts of the cost. For every new set of market prices $\mu$, you have to compute the entire formula from scratch.

This is the non-affine barrier in reduced-order modeling. Without the affine structure, to compute our small reduced matrix $A_r(\mu) = V^T A(\mu) V$, we have no choice but to first assemble the entire, massive $N \times N$ matrix $A(\mu)$ online, for every new parameter value. This reintroduces the dependence on $N$ into the online stage, and our dream of rapid, real-time queries is shattered.

Enter the Empirical Interpolator: A Magician's Apprentice

This is where a truly beautiful idea enters the stage: the Empirical Interpolation Method (EIM). The philosophy of EIM is simple: if nature does not provide an affine structure, we will build one ourselves. EIM is a general and powerful recipe for taking a non-affine function or operator and constructing an excellent affine approximation for it.

The insight is that while the function $g(x, \mu)$ might have a complicated-looking formula, the collection of all possible functions we can get by varying $\mu$ might be fundamentally simple. They might all be described as different combinations of a few "fundamental shapes". EIM sets out to discover these shapes.

The approximation EIM builds has exactly the affine form we desire:
$$g(x, \mu) \approx \sum_{q=1}^{M} \theta_q(\mu) \zeta_q(x)$$
Here, the $\{\zeta_q(x)\}_{q=1}^M$ form a basis of "fundamental shapes" that depend only on space, and the $\{\theta_q(\mu)\}_{q=1}^M$ are the parameter-dependent coefficients. This constructed affine form is our key to restoring the offline-online decomposition.

The true genius of EIM lies in how it constructs this approximation. It uses a greedy algorithm. Think of trying to replicate a complex painting using a limited palette of colors. You wouldn't start by mixing random colors. A sensible approach would be:

  1. Look at the painting and find the most dominant color. Add that color to your palette.
  2. Now, look at the difference between the original painting and your one-color approximation. This "residual image" contains everything you've missed.
  3. Find the most dominant color in this residual image and add it to your palette.
  4. Repeat this process, at each step capturing the most significant part of the remaining error.

EIM does precisely this, but for functions. It starts with a zero approximation and finds the parameter $\mu_1$ and point $x_1$ where the function $g(x, \mu)$ is largest. This function, suitably normalized, becomes the first basis function $\zeta_1(x)$. It then finds the parameter $\mu_2$ and point $x_2$ where the residual error is largest. This residual function becomes the second basis function $\zeta_2(x)$, and so on. These "magic points" $\{x_q\}$ become the interpolation points.

This greedy selection of both basis functions and interpolation points has a wonderful consequence. The method is constructed so that to find the coefficients $\theta_q(\mu)$ online, we only need to evaluate the full, expensive function $g(x, \mu)$ at the $M$ pre-selected magic points. This leads to a small $M \times M$ system of equations for the coefficients that is, by construction, lower-triangular with ones on the diagonal. It can be solved almost instantaneously with forward substitution. It's an algorithm that is not only powerful but also remarkably elegant and efficient.
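The greedy loop and the cheap online step fit in a few lines. Here is a toy sketch (Python/NumPy, on a fixed spatial grid; the non-affine family $g(x, \mu) = 1/(1 + \mu^2 x^2)$ and all sizes are invented purely for illustration):

```python
import numpy as np

def eim(G, M):
    """Greedy EIM on a snapshot matrix G (n_x grid points x n_mu parameters).

    Returns a basis Z (n_x x M) with Z[pts[k], k] = 1 and the magic-point
    indices pts; Z[pts, :] is lower-triangular by construction.
    """
    Z, pts = np.zeros((G.shape[0], 0)), []
    for _ in range(M):
        if pts:
            C = np.linalg.solve(Z[pts, :], G[pts, :])   # interpolation coefficients
            R = G - Z @ C                               # residual of every snapshot
        else:
            R = G
        j = int(np.argmax(np.max(np.abs(R), axis=0)))   # worst-approximated snapshot
        x = int(np.argmax(np.abs(R[:, j])))             # its largest-error point
        Z = np.column_stack([Z, R[:, j] / R[x, j]])     # normalized residual -> basis
        pts.append(x)
    return Z, pts

# Offline: train on the illustrative non-affine family g(x, mu) = 1/(1 + mu^2 x^2)
xs = np.linspace(0.0, 1.0, 200)
G = np.array([1.0 / (1.0 + (mu * xs) ** 2) for mu in np.linspace(1, 5, 30)]).T
Z, pts = eim(G, M=8)

# Online: for an unseen mu, evaluate g at the 8 magic points only
g_new = 1.0 / (1.0 + (2.7 * xs) ** 2)
theta = np.linalg.solve(Z[pts, :], g_new[pts])          # tiny lower-triangular solve
g_eim = Z @ theta                                       # full-field reconstruction
```

Note that `Z[pts, :]` comes out lower-triangular with ones on its diagonal, so the online solve really is forward substitution in disguise.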

EIM vs. DEIM: Functions vs. Vectors, and the Curse of the Mesh

The EIM philosophy can be applied in two different flavors, a distinction that has important practical consequences.

  • Empirical Interpolation Method (EIM) is the original, continuous version. It works at the function level, approximating a function $g(x, \mu)$ defined over the physical domain $\Omega$. It selects interpolation points in physical space.

  • Discrete Empirical Interpolation Method (DEIM) is its algebraic cousin. It works at the vector level, after the problem has been discretized. Instead of approximating a function, it approximates the huge $N$-dimensional vector that arises from the nonlinear term, for instance, the internal force vector in a simulation. It does this by selecting and evaluating only $m$ out of the $N$ components of this vector and then reconstructing the rest.

This seemingly subtle difference can be critical when we consider what happens as we refine our computational mesh to get more accurate solutions. As we make the mesh finer (decreasing the mesh size $h$), the dimension $N$ of our discrete vectors grows, often very quickly.

EIM, because it operates on the underlying continuous function, is largely insensitive to this. Its approximation error is a statement about functions, not about the mesh. So long as our numerical integration is accurate, the quality of the EIM approximation is mesh-robust; it does not degrade as we refine the mesh.

DEIM, on the other hand, is approximating a vector whose length is growing. A basis of $m$ vectors that does a great job of approximating vectors of length 10,000 might do a poor job for vectors of length 1,000,000. The fixed number of samples becomes increasingly sparse in the growing vector. As a result, the accuracy of a DEIM approximation with a fixed basis size $m$ can degrade as the mesh is refined. This "curse of the mesh" is a key consideration when choosing between the two methods.

The Price of Speed: Rigor and Certification

We have seen how EIM and its variants can overcome the non-affine barrier, enabling incredible computational speed-ups. But this speed comes at a price. The EIM expansion is an approximation. By using it, we are no longer solving the exact reduced version of our original problem; we are solving a reduced version of an approximated problem.

In many fields, particularly in engineering design and certification, having just a fast answer is not enough. We need a guarantee. We need to know how far our approximate solution might be from the true, unknown solution. This guarantee comes in the form of a rigorous a posteriori error bound, a mathematically proven certificate that states "the true error is guaranteed to be smaller than this computable number, $\Delta$".

When we use an exact affine ROM, we can often compute such a certificate efficiently in the online stage. However, the moment we introduce the EIM or DEIM approximation to handle non-affine or nonlinear terms, this certificate is voided. The calculated error bound becomes merely a heuristic "indicator," not a guarantee.

To restore rigor, we must be honest about the approximation we've made. The total error now has two sources: the error from the reduced basis projection, and the new error from the EIM/DEIM approximation itself. A truly certified method must bound both. The final, rigorous error bound will look something like this:
$$\text{Total Error} \le \Delta_{\text{ROM}} + \Delta_{\text{EIM}}$$
The term $\Delta_{\text{ROM}}$ is related to the residual of the approximated system and can be computed quickly online. The term $\Delta_{\text{EIM}}$ is an offline-computed bound on the error introduced by the EIM approximation itself. We pay for the online speed of EIM with the offline effort of calculating this extra bound and the online cost of adding it in. This reflects a deep and beautiful trade-off in computational science: the constant interplay between efficiency, accuracy, and certainty. The Empirical Interpolation Method is not just a clever algorithm; it is a powerful tool that, when used with care, allows us to navigate this trade-off, pushing the boundaries of what we can simulate, understand, and design.

Applications and Interdisciplinary Connections

Having journeyed through the clever mechanics of the Empirical Interpolation Method (EIM), you might be wondering, "What is this elegant trick good for?" It's a fair question. A beautiful piece of mathematics is one thing, but a tool that reshapes how we explore the world is another. As it turns out, EIM is very much the latter. It is a master key that unlocks computational bottlenecks across a breathtaking range of scientific and engineering disciplines. Let's take a tour and see where this key fits. Our journey will take us from the ground beneath our feet to the distant, violent cosmos, revealing a surprising unity in how we tackle complexity.

The Unseen World Beneath Our Feet

Let's start with something solid—or, rather, something porous. Imagine you are a geologist trying to predict how groundwater will flow through an aquifer, or an engineer planning to extract oil from a deep reservoir. The crucial property governing this flow is the permeability of the rock, a measure of how easily fluid can pass through it. This permeability, which we can call $k$, is not a simple constant. It varies dramatically from point to point in a pattern as complex and unique as a fingerprint.

If we want to simulate this flow, we face a problem. The governing equation, Darcy's Law, depends directly on this complicated function $k(x)$. What if we don't know the permeability field exactly? What if we want to explore thousands of possible scenarios to assess risks or optimize a strategy? The permeability might depend on several geological parameters $\boldsymbol{\mu}$ in a highly nonlinear way, perhaps as an exponential function of some underlying random fields. Running a full, high-resolution simulation for every single possible permeability field would take a prohibitive amount of computer time.

Here, EIM comes to the rescue. By treating the permeability field $k(x; \boldsymbol{\mu})$ as our non-affine function, we can use EIM to find a small set of "magic points" in space. In the online stage, for any new parameter vector $\boldsymbol{\mu}$, we only need to evaluate the permeability at these few magic points. EIM then gives us the exact recipe to combine a set of pre-computed "building-block" solutions to construct an excellent approximation of the full flow field. This transforms an impossibly large problem—exploring a vast parameter space—into a manageable one. We can ask "what if?" thousands of times and get answers in seconds, not days, providing a powerful tool for geoscientists and engineers to understand the complex, hidden world underground.
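The whole offline/online pipeline for a Darcy-type problem can be sketched end to end in one dimension (Python/NumPy; the permeability family, the grid, and every size below are invented for illustration, not a real geological model):

```python
import numpy as np

def eim(G, M):
    """Tiny greedy EIM: basis Z and magic indices pts from snapshot columns G."""
    Z, pts = np.zeros((G.shape[0], 0)), []
    for _ in range(M):
        R = G if not pts else G - Z @ np.linalg.solve(Z[pts, :], G[pts, :])
        j = int(np.argmax(np.max(np.abs(R), axis=0)))
        x = int(np.argmax(np.abs(R[:, j])))
        Z, pts = np.column_stack([Z, R[:, j] / R[x, j]]), pts + [x]
    return Z, pts

n = 400                                    # interior grid points for 1D Darcy flow
h = 1.0 / (n + 1)
xm = (np.arange(n + 1) + 0.5) * h          # cell midpoints, where k is sampled

def k_field(mu):
    """Illustrative non-affine permeability (a made-up two-parameter family)."""
    return np.exp(mu[0] * np.sin(np.pi * xm) + mu[1] * np.cos(3 * np.pi * xm))

def assemble(kv):
    """Finite-difference stiffness matrix of -(k u')' = f, k given at midpoints."""
    A = np.diag((kv[:-1] + kv[1:]) / h**2)
    A[np.arange(n - 1), np.arange(1, n)] = -kv[1:-1] / h**2
    A[np.arange(1, n), np.arange(n - 1)] = -kv[1:-1] / h**2
    return A

f = np.ones(n)
mus = [(m1, m2) for m1 in np.linspace(0.8, 1.2, 5) for m2 in np.linspace(-0.3, 0.3, 5)]

# Offline: EIM on the permeability, POD on solution snapshots, projected blocks
Z, pts = eim(np.array([k_field(mu) for mu in mus]).T, M=12)
S = np.array([np.linalg.solve(assemble(k_field(mu)), f) for mu in mus]).T
V = np.linalg.svd(S, full_matrices=False)[0][:, :8]
A_rq = [V.T @ assemble(Z[:, q]) @ V for q in range(12)]   # assemble is linear in k
f_r = V.T @ f

# Online: evaluate k at the 12 magic midpoints only, then solve an 8 x 8 system
mu_new = (1.05, 0.12)
theta = np.linalg.solve(Z[pts, :], k_field(mu_new)[pts])
u_rom = V @ np.linalg.solve(sum(t * A for t, A in zip(theta, A_rq)), f_r)

u_full = np.linalg.solve(assemble(k_field(mu_new)), f)    # reference full solve
rel_err = np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full)
```

Because the stiffness matrix depends linearly on the sampled permeability values, the EIM expansion of $k$ immediately yields an affine expansion of the operator, which is exactly what the online stage needs.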

Engineering the Future: From Antennas to Airplanes

The same principle that helps us peer into the earth also helps us design the technologies of tomorrow. Consider the world of electromagnetics—designing antennas, microwave cavities, or radar systems. The behavior of these devices is governed by Maxwell's equations. Often, the properties of the materials involved, like the electrical permittivity $\varepsilon_r$, depend on operational parameters such as temperature or frequency in a complicated, non-affine way.

An engineer designing a new antenna might want to see how its performance changes as it heats up. A brute-force approach would require re-running a massive simulation for every single temperature value. Again, this is slow and expensive. But by applying EIM to the parameter-dependent permittivity $\varepsilon_r(x, \mu)$, the engineer can perform the same magic trick. A few key "magic points" are identified within the device. For any new temperature $\mu$, one only needs to know the permittivity at these points. EIM then provides the coefficients to linearly combine pre-computed, parameter-independent matrices, assembling the full system matrix on the fly with astonishing speed. This allows for rapid design iteration, optimization, and sensitivity analysis that would be otherwise unthinkable.

This idea of breaking down a complex operator extends beautifully to the notoriously difficult realm of fluid dynamics. Simulating the flow of air over a wing or water through a pipe involves solving the Navier-Stokes equations, a set of coupled, nonlinear partial differential equations. The nonlinearity, in particular, is a major source of computational cost. But what if we could build reduced models for different parts of the physics separately?

Imagine a coupled system, like the vorticity-streamfunction formulation of fluid flow, where the evolution of vorticity $\omega$ is coupled to a streamfunction $\psi$ through a Poisson equation, $-\Delta \psi = \omega$. Both the nonlinear advection term and the source term for the Poisson equation can be targeted with EIM. We can build one set of basis functions and magic points for the advection, and another set for the Poisson source term. The crucial insight is that these two reduced models don't live in isolation. For the whole simulation to be stable and consistent, we need to know how to translate the reduced information from one part to the other. The mathematics of EIM allows us to derive a "coupling matrix" that does just this, providing a stable, pre-computable map from the reduced representation of the source term to the reduced representation of the streamfunction needed by the advection term. This showcases EIM not just as a tool for a single equation, but as a key component in a modular, "divide and conquer" strategy for tackling complex, multi-physics simulations.

At the Frontiers of Simulation: A Sharper, Smarter Tool

As we push the boundaries of computational science, we invent ever more sophisticated numerical methods, such as Discontinuous Galerkin (DG) or spectral methods. These methods have their own intricate internal structures, and a "one-size-fits-all" application of EIM might not work. In fact, it can be a recipe for disaster. This is where the true versatility and depth of the EIM philosophy shine.

In a DG method, for example, calculations aren't just done inside elements, but also on the "faces" between them. The flux of quantities across these faces is a critical part of the simulation. If this flux depends on a parameter non-affinely, we can apply EIM not to the whole domain, but specifically to the collection of points on all the faces where the flux is calculated. EIM is flexible enough to target the precise part of the calculation that is causing the bottleneck.

But there's a deeper subtlety. Advanced numerical methods often rely on specific choices of points (quadrature points) and weights to compute integrals, and the stability of the entire simulation can depend on these choices. If we apply a standard EIM (or its discrete cousin, DEIM) without respecting this structure, we might inadvertently break the mathematical properties that keep the simulation from blowing up. The solution? We don't discard EIM; we make it smarter. We can formulate a weighted version of DEIM that incorporates the quadrature weights of the underlying simulation. This "quadrature-aware" approach ensures that the hyper-reduction respects the delicate numerical balance of the full model, preserving its stability and accuracy. This is a beautiful example of how a powerful idea like EIM co-evolves with other advanced methods.

Furthermore, EIM isn't an all-or-nothing proposition. What happens when a fluid develops a shock wave, or a material fractures? In these situations, the solution changes so abruptly that a basis built from smooth snapshots may struggle to provide a good approximation. Does this mean we must abandon the speed of EIM? Not at all. We can design a hybrid or adaptive algorithm. We can use the EIM approximation itself to estimate its own error, on the fly, for each part of our simulation domain. If the estimated error in a region is small, we happily use the fast EIM surrogate. If the error grows too large—a sign that something interesting and sharp is happening—the algorithm can automatically switch back to the more expensive, but more robust, full calculation in that region. This creates a "best of both worlds" simulation that is fast where the solution is simple and accurate where it is complex.

Embracing Uncertainty

So far, we've mostly considered parameters that are unknown but fixed constants. But what if the properties of our system are truly random? In many real-world problems, from materials science to climate modeling, we must contend with inherent uncertainty. The material properties of a manufactured component, for instance, may not be a single value but a random field that varies from sample to sample.

Modeling this requires tools from uncertainty quantification, like the Stochastic Galerkin Method. This powerful method can get bogged down if the random coefficient (say, a thermal conductivity or elastic modulus) depends on the underlying random variables in a non-affine way. A typical example is a lognormal random field, where the coefficient is the exponential of a Gaussian random field. The exponential function completely destroys the mathematical structure needed for an efficient Stochastic Galerkin solver.

Once again, EIM provides the fix. By applying EIM to the non-affine random coefficient, we can generate a high-fidelity surrogate that is affine. This restored structure, often in the form of a sum of Kronecker products, makes the Stochastic Galerkin method computationally feasible again. EIM thus acts as a bridge, allowing the powerful machinery of model order reduction to connect with the powerful machinery of uncertainty quantification, enabling us to perform complex simulations that realistically account for the randomness of the world.

Listening to the Cosmos

Our final stop is perhaps the most spectacular. In the last decade, humanity opened a new window onto the universe: gravitational waves. The detection of these faint ripples in spacetime, created by cataclysmic events like the collision of two black holes, is one of the great triumphs of modern science. At the heart of this discovery lies a computational challenge of astronomical proportions, and EIM is a key part of the solution.

To find a gravitational wave signal buried in the noisy data from detectors like LIGO and Virgo, scientists use a technique called matched filtering. This requires a "template bank"—a vast library of all possible waveforms that a binary black hole merger could produce. The problem is, generating these waveforms requires solving Einstein's equations of General Relativity numerically, a task so computationally demanding that a single simulation can take weeks or months on a supercomputer. Building a template bank of millions of waveforms this way is simply impossible.

This is where surrogate models, powered by reduced basis methods and EIM, have revolutionized the field. The process is a masterpiece of computational science: First, a "training set" is painstakingly created by running a few hundred or thousand high-fidelity numerical relativity simulations for carefully chosen parameters (like the masses and spins of the black holes). Then, a greedy algorithm builds a compact reduced basis from this training set. At each step, it asks: "Of all the training waveforms, which one is worst-represented by my current basis?" The error is measured in a way that is physically meaningful for detection—the loss in signal-to-noise ratio. The "worst-offender" is used to enrich the basis. Because the raw complex waveform is highly oscillatory, it's far more efficient to build separate models for its slowly varying amplitude and its monotonically increasing phase.

But this basis is not enough. We need a way to find the coefficients for any new set of parameters quickly. This is EIM's starring role. For both the amplitude and the phase, EIM identifies a small set of "magic" time points. To reconstruct a waveform for a new black hole binary, one no longer needs to solve Einstein's equations. Instead, one only needs to evaluate a simple analytical formula for the amplitude and phase at these few magic time points, solve a tiny linear system for the coefficients, and voilà—an incredibly accurate waveform is generated in milliseconds. The accuracy of this interpolation is so good that the error converges exponentially as more basis functions and interpolation points are added.
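The amplitude/phase strategy can be illustrated with a toy chirp (emphatically not a solution of Einstein's equations; the amplitude and phase laws below are invented for demonstration). Two small empirical interpolants for the smooth pieces reconstruct the highly oscillatory signal from a handful of magic time samples:

```python
import numpy as np

def eim(G, M):
    """Tiny greedy EIM: returns basis Z and magic-sample indices for snapshots G."""
    Z, pts = np.zeros((G.shape[0], 0)), []
    for _ in range(M):
        R = G if not pts else G - Z @ np.linalg.solve(Z[pts, :], G[pts, :])
        j = int(np.argmax(np.max(np.abs(R), axis=0)))
        x = int(np.argmax(np.abs(R[:, j])))
        Z, pts = np.column_stack([Z, R[:, j] / R[x, j]]), pts + [x]
    return Z, pts

t = np.linspace(0.0, 1.0, 2000)
train = np.linspace(1.0, 2.0, 30)                 # stand-in "mass-like" parameter

# Toy chirp (NOT a relativity waveform): smooth amplitude, accelerating phase
A = lambda mu: (0.2 + t) ** (-mu / 4.0)
phi = lambda mu: 60.0 * t ** (1.0 + mu / 3.0)

ZA, pA = eim(np.array([A(m) for m in train]).T, M=6)    # amplitude surrogate
Zp, pp = eim(np.array([phi(m) for m in train]).T, M=6)  # phase surrogate

mu = 1.37                                         # unseen parameter
a = ZA @ np.linalg.solve(ZA[pA, :], A(mu)[pA])    # from 6 amplitude samples
p = Zp @ np.linalg.solve(Zp[pp, :], phi(mu)[pp])  # from 6 phase samples
h = a * np.cos(p)                                 # oscillatory waveform, reassembled
```

The oscillatory signal $h$ itself would need a far larger basis; interpolating the smooth amplitude and phase separately and recombining them is what keeps the surrogate so compact.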

Without this EIM-powered surrogate modeling, the real-time detection and analysis of gravitational-wave events would be impossible. It is the computational engine that allows scientists to instantly match a faint chirp from the cosmos to the violent dance of two black holes millions of light-years away.

From the hidden flow of water in the earth, to the design of our electronics, to the fundamental uncertainties of nature, and finally to the echoes of cosmic collisions, the Empirical Interpolation Method proves itself to be far more than a niche numerical trick. It is a profound and versatile principle for taming computational complexity, allowing us to build faithful, fast, and explorable models of an intricate universe. It teaches us that even in the most complex systems, there are often a few key questions to which the answers tell you almost everything you need to know. The genius of EIM is that it tells us how to find those questions.