Popular Science

Convergence of Sets: From Abstract Theory to Practical Applications

SciencePedia
Key Takeaways
  • The convergence of a sequence of sets can be rigorously defined when its limit inferior (permanent residents) equals its limit superior (frequent visitors).
  • Different metrics, like the symmetric difference for size and the Hausdorff distance for shape, define distinct types of set convergence with unique outcomes.
  • In dynamical systems, limit sets describe a system's ultimate long-term behavior, determining whether it settles into an equilibrium, a cycle, or a more complex state.
  • Understanding the properties of a limit, like the electron cusp in quantum chemistry, allows for the creation of vastly more efficient computational approximations.

Introduction

The concept of convergence—a sequence of items getting closer and closer to a limiting destination—is fundamental to mathematics. While we can easily visualize a sequence of numbers approaching zero, the notion becomes far more intriguing and complex when applied to sequences of sets. What does it mean for a series of wiggling curves to converge to a solid rectangle, or for a cloud of points to coalesce into a smooth circle? This question reveals that "getting closer" can be defined in multiple, equally valid ways, each offering a unique perspective on the dynamic world of shapes and forms. This article addresses the challenge of extending our intuition of convergence from simple points to complex sets, providing a conceptual journey into this powerful mathematical idea. First, in "Principles and Mechanisms," we will build a rigorous foundation, exploring the core definitions of set convergence like limit superior and inferior, and contrasting different ways to measure the distance between sets, such as the Hausdorff distance and symmetric difference. Subsequently, in "Applications and Interdisciplinary Connections," we will witness these theories in action, discovering how set convergence provides critical insights into the long-term destiny of dynamical systems and the practical challenges of approximation in quantum chemistry.

Principles and Mechanisms

We all have an intuitive feeling for what it means for things to "converge" or "approach" a limit. The sequence of numbers $1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \dots$ clearly marches towards zero. But what could it possibly mean for a sequence of sets—collections of points, shapes, or numbers—to converge? Can a jittery, oscillating curve approach a solid rectangle? Can a collection of disconnected dust motes converge to a perfect circle? The answer, perhaps surprisingly, is a resounding yes. The journey to understanding how is a wonderful adventure in mathematical thinking, revealing that even a concept like "getting closer" can have many beautiful and distinct meanings.

An Intuitive First Step: Monotonic Sequences

Let's begin with the simplest case. Imagine a sequence of sets that are "nested" inside each other. Consider a sequence of intervals $A_n$ on the real number line, defined as $A_n = [-n - \frac{1}{n}, n^2]$ for each positive integer $n$. For $n=1$, we have $A_1 = [-2, 1]$. For $n=2$, we have $A_2 = [-2.5, 4]$. For $n=3$, we have $A_3 \approx [-3.33, 9]$. You can see the pattern: each new set completely contains the one before it, $A_1 \subset A_2 \subset A_3 \subset \dots$. This is an increasing sequence of sets. What is it approaching? As $n$ gets larger and larger, the left end goes to $-\infty$ and the right end goes to $+\infty$. The sets are swallowing up more and more of the number line. It seems natural to say that the limit is the collection of all points that eventually get included, which is simply the union of all the sets in the sequence: $\bigcup_{n=1}^\infty A_n$. In this case, the limit is the entire set of real numbers, $\mathbb{R}$.

Now, let's look at the opposite situation. Consider the sequence $B_n = (1 - \frac{1}{n}, 3 + \frac{1}{n^2}]$. Here, $B_1 = (0, 4]$, $B_2 = (0.5, 3.25]$, $B_3 \approx (0.67, 3.11]$. Each set is smaller than the one before it; the left endpoint inches up towards 1, while the right endpoint inches down towards 3. This is a decreasing sequence of sets: $B_1 \supset B_2 \supset B_3 \supset \dots$. What is the limit here? It's not the union—that would just give us the biggest set, $B_1$. Instead, the limit must be the set of points that manage to survive and stay in every single set, no matter how far down the sequence we go. This is the intersection of all the sets: $\bigcap_{n=1}^\infty B_n$. A point $x$ is in this limit if it's greater than $1 - \frac{1}{n}$ for all $n$ (which means $x \ge 1$) and less than or equal to $3 + \frac{1}{n^2}$ for all $n$ (which means $x \le 3$). Thus, the sequence of open-closed intervals converges to the closed interval $[1, 3]$.
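A quick numerical sketch of the decreasing example, using exact rational endpoints to avoid rounding: since the $B_n$ are nested, the intersection of the first $N$ of them is just $B_N$ itself, and its endpoints squeeze toward 1 and 3.

```python
from fractions import Fraction

def B_endpoints(n):
    """Endpoints of B_n = (1 - 1/n, 3 + 1/n^2]."""
    return (1 - Fraction(1, n), 3 + Fraction(1, n**2))

# Because B_1 ⊃ B_2 ⊃ ..., the intersection of B_1..B_N is (l_N, r_N],
# where l_N is the largest left endpoint and r_N the smallest right one.
lefts, rights = zip(*(B_endpoints(n) for n in range(1, 1001)))
l, r = max(lefts), min(rights)
print(float(l), float(r))  # 0.999 3.000001 -- closing in on [1, 3]
```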

The Meeting of Two Minds: Limit Superior and Limit Inferior

This idea of unions for increasing sequences and intersections for decreasing ones is elegant, but most sequences of sets are not so well-behaved. They might expand and contract, or shift around in strange ways. To handle the general case, we need two clever new concepts: the limit inferior and the limit superior.

Think of a point $x$ and a sequence of sets $A_n$. We can ask two questions about the long-term relationship between $x$ and the sequence:

  1. The Persistence Question: Is the point $x$ a member of all sets in the sequence, from a certain point onwards? This is a very strict condition. The set of all points that satisfy this is called the limit inferior, denoted $\liminf_{n\to\infty} A_n$. You can think of it as the set of "eventual permanent residents." Formally, it's the union of all the tail-end intersections: $\liminf A_n = \bigcup_{n=1}^\infty \bigcap_{k=n}^\infty A_k$.

  2. The Recurrence Question: Does the point $x$ appear in the sets infinitely often? Here, $x$ can drop out of the sequence for a while, but it must always come back. The set of all such points is the limit superior, denoted $\limsup_{n\to\infty} A_n$. This is the set of "frequent visitors." Formally, it's the intersection of all the tail-end unions: $\limsup A_n = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k$.

By definition, any point that is eventually in all sets must also be in infinitely many of them. This means that for any sequence, we always have $\liminf A_n \subseteq \limsup A_n$. And now we have a beautiful and robust definition of convergence: a sequence of sets $A_n$ converges to a limit set $A$ if and only if the set of permanent residents is the same as the set of frequent visitors. That is, $\liminf A_n = \limsup A_n = A$.
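The tail-intersection and tail-union formulas can be explored directly on a finite truncation of a periodic sequence of sets — a sketch, with the caveat that only tails long enough to contain a full period behave like the true infinite tails:

```python
from functools import reduce

# A periodic sequence of sets: A_n = {2} ∪ ({0} if n is even else {1}).
# The point 2 is a permanent resident; 0 and 1 are only frequent visitors.
seq = [{2, 0} if n % 2 == 0 else {2, 1} for n in range(10)]

# Finite proxy for the infinite tails: drop the last two, so every tail
# used still contains a full period of the pattern.
tails = [seq[n:] for n in range(len(seq) - 2)]

liminf = set().union(*(reduce(set.intersection, t) for t in tails))
limsup = reduce(set.intersection, (set().union(*t) for t in tails))

print(liminf)  # {2}        -- the eventual permanent residents
print(limsup)  # {0, 1, 2}  -- the frequent visitors
```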

This definition is powerful. For example, it allows us to prove a rather satisfying result about complements. Using De Morgan's laws, one can show that $(\liminf A_n)^c = \limsup (A_n^c)$ and $(\limsup A_n)^c = \liminf (A_n^c)$. What this means is that if a sequence of sets $A_n$ converges to a limit $A$, then the sequence of their complements, $A_n^c$, also converges, and it converges to precisely the complement of the limit, $A^c$. The convergence is preserved under this fundamental set operation.

Measuring the Difference: A Tale of Two Metrics

The $\liminf/\limsup$ definition is rigorous, but it doesn't give us a number to quantify how close two sets are. In science and mathematics, we love to turn concepts into numbers. Can we define a "distance" between sets? Yes, and there is more than one way to do it, each telling a different story.

1. Convergence in Measure: The Symmetric Difference

One way to think about the difference between two sets, $A$ and $B$, is to look at the regions they don't share. This is the set of points in $A$ but not $B$, together with the points in $B$ but not $A$. This combined region is called the symmetric difference, $A \,\Delta\, B = (A \setminus B) \cup (B \setminus A)$.

If our sets have a notion of "size"—like length, area, or volume, which mathematicians call a measure, $\mu$—we can define the distance between them as the size of their symmetric difference:

$$d(A, B) = \mu(A \,\Delta\, B)$$

A sequence of sets $A_n$ converges to $A$ in this sense if the measure of their symmetric difference goes to zero: $\mu(A_n \,\Delta\, A) \to 0$. This means the "area of disagreement" between $A_n$ and $A$ vanishes in the limit.
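For intervals, the identity $\mu(A \,\Delta\, B) = \mu(A) + \mu(B) - 2\mu(A \cap B)$ makes this distance easy to compute. A minimal sketch, showing $A_n = [0, 1 + \frac{1}{n}]$ converging in measure to $A = [0, 1]$:

```python
def sym_diff_measure(A, B):
    """mu(A symmetric-difference B) for intervals A = [a0, a1], B = [b0, b1],
    computed as mu(A) + mu(B) - 2 * mu(A intersect B)."""
    (a0, a1), (b0, b1) = A, B
    overlap = max(0.0, min(a1, b1) - max(a0, b0))
    return (a1 - a0) + (b1 - b0) - 2 * overlap

for n in (1, 10, 100):
    # the "area of disagreement" is exactly 1/n, which vanishes as n grows
    print(n, sym_diff_measure((0, 1 + 1 / n), (0, 1)))
```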

This metric can lead to some fascinating outcomes. Consider a sequence of sets $A_n = C_n \cup [0, q_n]$ within the interval $[0,1]$. Here, $C_n$ represents the $n$-th stage in the construction of the famous Cantor set (which has a measure of zero), and $q_n$ is the $n$-th partial sum of the series for $e^{-1}$, namely $\sum_{k=0}^n \frac{(-1)^k}{k!}$. As $n$ grows, the $C_n$ part "evaporates" in terms of measure, while the interval part $[0, q_n]$ neatly converges to the interval $[0, e^{-1}]$. Using the symmetric difference metric, the distance between $A_n$ and the simple interval $[0, e^{-1}]$ tends to zero. So, from the perspective of measure, the complicated sequence $A_n$ converges to the simple interval $[0, e^{-1}]$, whose measure is simply its length, $e^{-1}$. All the intricate structure of the Cantor set construction vanishes under the gaze of this particular metric.

2. Geometric Convergence: The Hausdorff Distance

The measure-based distance is blind to sets of measure zero. The Cantor set, a line, or a collection of points all have zero area in a plane. The symmetric difference would see them all as being the "same size." We need a different tool to capture geometric shape and form. This is the Hausdorff distance.

The idea is magnificently intuitive. To find the Hausdorff distance between two sets AAA and BBB, you perform two checks:

  1. For every point $a$ in set $A$, find its closest distance to any point in set $B$. Then identify the point $a$ that has to travel the farthest to reach $B$.
  2. Now do the reverse: for every point $b$ in set $B$, find the "worst-case" distance it has to travel to reach set $A$.

The Hausdorff distance, $d_H(A, B)$, is the larger of these two worst-case distances. A sequence of sets $A_n$ converges to $A$ if this distance goes to zero, meaning that in the limit, every point in $A_n$ is very close to some point in $A$, and every point in $A$ is very close to some point in $A_n$.
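The two worst-case checks translate directly into code for finite point sets (a sketch; practical implementations use spatial indexing for speed). As a preview of convergence, the demo also compares $n$ evenly spaced points on the unit circle with a dense sample of the circle itself:

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between finite point sets: the larger of the
    two directed worst-case nearest-neighbour distances."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

print(hausdorff([(0, 0), (1, 0)], [(0, 0)]))  # 1.0

def roots_of_unity(m):
    return [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
            for k in range(m)]

# n evenly spaced points vs. a dense sample of the circle: d_H shrinks with n
for n in (4, 16, 64):
    print(n, round(hausdorff(roots_of_unity(n), roots_of_unity(1024)), 4))
```

The printed distances shrink roughly like the spacing between adjacent points, illustrating Hausdorff convergence of the point clouds to the circle.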

This geometric perspective yields some of the most visually stunning examples of set convergence:

  • The Weaving Curve: Imagine the graph of $f_n(x) = \cos(nx)$ on an interval, say $[0, 2\pi]$. As $n$ increases, the wave oscillates more and more frantically. The sequence of graphs does not converge to the graph of any single function. Instead, these curves get closer and closer to every point in the rectangle $[0, 2\pi] \times [-1, 1]$. In the Hausdorff metric, this sequence of one-dimensional curves converges to a two-dimensional solid rectangle! The wiggling line, in its infinite frenzy, fills the entire space.

  • Stardust to a Circle: Picture the $n$-th roots of unity, which are $n$ points spaced evenly on the unit circle. Now, at each of these $n$ points, place a tiny closed disk of radius $r_n = 1/n$. This gives us a set $S_n$. For $n=3$, it's three disks in a triangle. For $n=10$, it's ten smaller disks in a decagon formation. As $n \to \infty$, we have an ever-increasing number of ever-smaller disks. This "stardust" of disks converges, in the Hausdorff metric, to the smooth, continuous unit circle itself. What's more, each set $S_n$ has a positive area, but they converge to the unit circle, a set whose two-dimensional area is zero.

  • Sculpting the Void: The construction of the Cantor set provides another perfect example. We start with $C_0 = [0,1]$ and iteratively remove the open middle third of each interval to get $C_1, C_2, \dots$. Each $C_n$ is a finite collection of closed intervals. The Hausdorff distance between the approximation $C_n$ and the final, infinitely dusty Cantor set $C$ is exactly $\frac{1}{2 \cdot 3^{n+1}}$. This formula beautifully quantifies the rate at which our sequence of tangible, blocky shapes converges to its ethereal, fractal limit.
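The distance $\frac{1}{2 \cdot 3^{n+1}}$ can be checked numerically. Since the Cantor set $C$ sits inside every stage $C_n$, only one direction of the Hausdorff check matters, and a much deeper stage $C_{n+4}$ can stand in for $C$ itself (a sketch using grid sampling):

```python
def cantor_intervals(n):
    """The 2**n closed intervals making up stage C_n of the construction."""
    ivs = [(0.0, 1.0)]
    for _ in range(n):
        ivs = [piece for (a, b) in ivs
               for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

def dist_to(x, ivs):
    """Distance from a point x to a union of closed intervals."""
    return min(0.0 if a <= x <= b else min(abs(x - a), abs(x - b))
               for a, b in ivs)

def directed(src, dst, samples=200):
    """Grid-sampled worst distance from the union src to the union dst."""
    return max(dist_to(a + (b - a) * k / samples, dst)
               for a, b in src for k in range(samples + 1))

# C ⊂ C_{n+4} ⊂ C_n, so the one-sided distance to C_{n+4} approximates d_H(C_n, C)
for n in range(1, 4):
    approx = directed(cantor_intervals(n), cantor_intervals(n + 4))
    print(n, round(approx, 6), round(1 / (2 * 3 ** (n + 1)), 6))
```

The even grid hits each interval's midpoint, which is the centre of the next gap to be removed and hence the worst point, so the sampled value matches the formula to floating-point accuracy.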

Worlds of Convergence: A Concluding Remark

As our journey shows, the question "Does this sequence of sets converge?" is incomplete. The proper question is, "Converge in what sense?" Convergence based on measure is concerned with size and bulk, while convergence in the Hausdorff metric is concerned with shape and position.

These different worlds do not always agree. Consider a sequence of probability measures (ways of distributing one unit of "mass") on the interval $[0,1]$ defined as $\mu_n = (1-\frac{1}{n})\delta_0 + \frac{1}{n}\delta_1$, where $\delta_x$ represents putting all the mass at the point $x$. As $n \to \infty$, almost all the mass ends up at the point 0. We say the measures converge weakly to $\delta_0$. However, the support of each measure $\mu_n$ (the set where the mass is located) is the two-point set $S_n = \{0, 1\}$. The support of the limit measure is $S = \{0\}$. The Hausdorff distance between $\{0,1\}$ and $\{0\}$ is always 1, no matter how large $n$ is. The supports do not converge geometrically, even though the measures converge in their own way.

Understanding the convergence of sets is not just an abstract exercise. It is fundamental to fields ranging from fractal geometry and dynamical systems, where we study the long-term behavior of evolving systems, to computational analysis and image processing, where we approximate complex shapes with simpler ones. By carefully defining what we mean by "getting closer," we build a powerful and versatile language to describe the dynamic and ever-changing world of shapes and forms.

Applications and Interdisciplinary Connections

Now that we have grappled with the mathematical machinery of converging sets, let us step back and ask the most important question a physicist, or any scientist, can ask: So what? Where does this abstract idea touch the real world? What problems does it solve, and what new ways of thinking does it afford us? You will be delighted to find that the notion of a sequence of points or functions converging to a limiting set is not some esoteric novelty. It is a deep and powerful principle that brings clarity to an astonishing range of fields, from predicting the fate of entire ecosystems to calculating the fundamental properties of matter itself.

We will explore this in two grand arenas. First, we will see how it governs the ultimate destiny of dynamical systems. Then, we will discover how it illuminates the path we must take to approximate the infinite complexity of the quantum world.

The Destiny of Systems: Limit Sets in Dynamics

Imagine a simple ball rolling inside a large bowl. It rolls back and forth, losing a little energy with each swing, until it eventually settles at the very bottom. Where does it end up? Not just near the bottom, but at the bottom. The final state is a single point. Now imagine a planet in a stable orbit around a star. As time goes on, where is the planet? Well, it’s always somewhere on its elliptical path. The set of points it visits over and over again is not a single point, but a closed loop.

These are intuitive examples of limit sets. In the language of dynamical systems, a system's state is a point in a "state space," and its evolution over time is a trajectory carving a path through this space. The omega-limit set, denoted $\omega$, is the set of all points that the system comes back to arbitrarily closely, infinitely often, as time marches toward infinity. This set represents the ultimate, long-term behavior of the system. For the ball in the bowl, the $\omega$-limit set is a single point (the equilibrium). For the idealized planet, it's a periodic orbit.

A beautiful and profound simplification occurs in a huge class of physical systems known as gradient systems. These are systems that possess a special function, often corresponding to energy or a similar quantity, that always decreases along any trajectory. Think of it as a mathematical landscape, or a potential $V$. The system's evolution is always "downhill." The equation of motion is simply $\dot{\mathbf{x}} = -\nabla V(\mathbf{x})$. What is the consequence of this relentless descent? The system cannot roll downhill forever unless the hill is infinitely deep. If the system is confined to a bounded region, it must eventually approach a place where the landscape is flat—that is, a place where $\nabla V = \mathbf{0}$. These are the equilibrium points.

This simple idea has a startling consequence: for any bounded trajectory in a gradient system, the only possible long-term behavior is to settle into an equilibrium. There can be no persistent oscillations, no periodic orbits, and certainly no chaos. The potential function, often called a Lyapunov function, acts as a guiding hand, forcing the system's trajectory to converge to a set composed entirely of equilibria.
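A minimal numerical sketch of this behaviour, for an illustrative potential not taken from the text: forward-Euler integration of $\dot{\mathbf{x}} = -\nabla V$ for the double well $V(x, y) = (x^2 - 1)^2 + y^2$ drives every trajectory into one of the two minima $(\pm 1, 0)$.

```python
def grad_V(x, y):
    """Gradient of the double-well landscape V(x, y) = (x^2 - 1)^2 + y^2."""
    return (4 * x * (x * x - 1), 2 * y)

# forward-Euler descent along x' = -grad V(x): always "downhill"
x, y, dt = 0.5, 1.0, 0.01
for _ in range(10_000):
    gx, gy = grad_V(x, y)
    x, y = x - dt * gx, y - dt * gy

print(round(x, 6), round(y, 6))  # 1.0 0.0 -- the equilibrium (1, 0)
```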

This principle is not confined to simple mechanical systems. It can be generalized with breathtaking scope. What if our "landscape" isn't a simple surface in our 3D world, but a more abstract, curved space—a manifold? In the field of Morse theory, we see that for a generic potential function on a compact manifold (like a sphere or a torus), the entire, potentially complex flow of the system is governed by a finite number of critical points (peaks, valleys, and saddles). Any trajectory has an alpha-limit set (where it came from as $t \to -\infty$) and an omega-limit set (where it's going as $t \to \infty$), and both of these sets consist of single critical points. The intricate dance of the dynamics over the entire manifold is reduced to a network of paths connecting a few special points. The topology of the space reveals the destiny of the flow.

Let's bring this powerful idea back from the cosmos of abstract manifolds to the earthy realm of ecology. Consider two species competing for the same limited resources. Their populations, $x$ and $y$, evolve according to a set of coupled equations. Can we predict the outcome? Will one species drive the other to extinction? Will they coexist? Or will their populations oscillate in a perpetual cycle of boom and bust? This is a question about the $\omega$-limit sets of the ecological system. By analyzing the flow in the $(x, y)$ population space, we can identify the equilibria—points like "species A wins," "species B wins," or "coexistence." Furthermore, by using clever tools like the Bendixson-Dulac criterion, we can often prove that no periodic orbits can exist in the system. This tells us that the fate of the ecosystem will not be an endless cycle but a convergence to one of the stable equilibria. The abstract concept of a limit set becomes a concrete prediction about life and death.
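As a sketch, with made-up competition coefficients rather than data from the text: a weak-competition Lotka-Volterra model whose trajectories flow to the coexistence equilibrium.

```python
# Competitive Lotka-Volterra (illustrative parameters):
#   x' = x (1 - x - 0.5 y),   y' = y (1 - y - 0.5 x)
# With competition strength 0.5 < 1, the coexistence equilibrium
# x* = y* = 2/3 attracts every trajectory in the positive quadrant.
x, y, dt = 0.1, 0.9, 0.01
for _ in range(20_000):
    dx = x * (1 - x - 0.5 * y)
    dy = y * (1 - y - 0.5 * x)
    x, y = x + dt * dx, y + dt * dy

print(round(x, 4), round(y, 4))  # 0.6667 0.6667 -- coexistence
```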

Sometimes, a system's long-term behavior is more subtle. It might not settle into a single equilibrium or a simple loop. The nonwandering set, $\Omega$, is a broader concept that captures all points exhibiting any form of recurrence. A point is nonwandering if, for any small neighborhood around it, trajectories starting in that neighborhood eventually return to it at some later time. This set, by its nature, contains all the interesting long-term dynamics. It includes all equilibria and all periodic orbits. The famous Poincaré-Bendixson theorem tells us that for a planar system, a nonempty, compact $\omega$-limit set that contains no fixed points must be a periodic orbit. The nonwandering set is the true stage upon which the final act of any dynamical system plays out.

Approaching Truth: Convergence in the Quantum World

Let's now turn from the dynamics of the visible world to the structure of the invisible one. One of the central challenges in modern science is approximating an infinitely complex reality. In quantum chemistry, we strive to solve the Schrödinger equation to find the exact wavefunction, $\Psi$, which contains all possible information about a molecule's electrons. This wavefunction lives in an infinite-dimensional space (a Hilbert space), and we can never write it down perfectly.

So, we approximate. The standard method is to build the wavefunction from a finite set of simpler, known mathematical functions—a basis set. Imagine trying to build a complex sculpture using a finite set of Lego blocks. The more blocks you have (and the more varied their shapes), the better your approximation will be. In quantum chemistry, our "blocks" are one-electron functions called orbitals, and the quality of our basis set is often described by a number $L$, which roughly corresponds to the complexity of the shapes we are using (their highest angular momentum).

The best possible approximation we can build with a given set of blocks is called the Full Configuration Interaction (FCI) solution for that basis. Our grand strategy is to use larger and larger basis sets, generating a sequence of FCI approximations that, we hope, converges to the one true, exact wavefunction. This is a profound example of "convergence of sets," where our sequence is a series of approximations built from an ever-expanding set of basis functions.

But here, nature throws us a nasty curveball. The exact wavefunction has a peculiar and crucial feature that our simple building blocks struggle to replicate. The electronic Hamiltonian contains the term $\frac{1}{r_{ij}}$ representing the Coulomb repulsion between any two electrons $i$ and $j$. As two electrons get very close ($r_{ij} \to 0$), this repulsion blows up. For the total energy to remain finite, the kinetic energy must produce an equal and opposite infinity to cancel it out. This forces the exact wavefunction to have a very specific, non-smooth shape at the point of electron coalescence. It forms a cusp—a sharp point, like the tip of a cone. For two opposite-spin electrons, this is quantified by the Kato cusp condition:

$$\left.\frac{\partial \overline{\Psi}}{\partial r_{12}}\right|_{r_{12}=0} = \frac{1}{2}\,\overline{\Psi}(r_{12}=0)$$

This means the wavefunction must be linear in the inter-electron distance $r_{12}$ at very short range (here $\overline{\Psi}$ denotes the wavefunction averaged over the angles of approach of the two electrons).
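One way to see the claimed linearity, spelled out as a short derivation: Taylor-expand the spherically averaged wavefunction about $r_{12} = 0$ and impose the cusp condition on the first derivative.

```latex
\overline{\Psi}(r_{12})
  = \overline{\Psi}(0) + r_{12}\,\overline{\Psi}'(0) + O(r_{12}^2)
  = \overline{\Psi}(0)\left(1 + \tfrac{1}{2}\,r_{12}\right) + O(r_{12}^2)
```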

Our problem is this: the standard basis functions we use (Gaussian orbitals) are completely smooth. They are like round pebbles. Trying to build a sharp, pointy cusp out of smooth, round pebbles is an incredibly inefficient task. You can get closer and closer, but it requires an enormous number of pebbles arranged just so. In the same way, describing the electron-electron cusp with a basis set of smooth orbitals requires an enormous number of functions, particularly those with high angular momentum (d, f, g, h functions and beyond).

The consequence is an agonizingly slow crawl toward the exact answer. The error in the correlation energy—the very energy that holds molecules together—decreases with the size of our basis set $L$ only as $L^{-3}$. This means that to halve the error, we have to do a calculation that is vastly more expensive. For decades, this "basis set convergence problem" was one of the biggest bottlenecks in computational chemistry.
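This $L^{-3}$ behaviour can even be exploited: if the error really follows $A L^{-3}$, then two calculations at successive $L$ determine the limit. A sketch of such a two-point extrapolation, using made-up correlation energies (the numbers are fictitious, though the extrapolation idea itself is standard practice in the field):

```python
def extrapolate_cbs(L1, E1, L2, E2):
    """Limit E_inf of the model E(L) = E_inf + A / L**3, from two points."""
    w1, w2 = L1 ** 3, L2 ** 3
    return (w2 * E2 - w1 * E1) / (w2 - w1)

# illustrative (fictitious) correlation energies in hartree at L = 3 and 4
E_inf = extrapolate_cbs(3, -0.250, 4, -0.260)
print(round(E_inf, 4))  # -0.2673 -- beyond either finite-L value
```

Note also what $L^{-3}$ implies about cost: halving the error means multiplying $L$ by $2^{1/3} \approx 1.26$, and the expense of the calculation grows steeply with $L$.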

The breakthrough came from fully appreciating the nature of the limit object we were trying to reach. If the problem is building a cusp, why not just add a block that already has a cusp built in? This is the revolutionary idea behind explicitly correlated (F12) methods. We augment our basis with a few special functions that explicitly depend on the inter-electron distance, $r_{12}$, in a way that perfectly satisfies the Kato cusp condition.

The result is nothing short of spectacular. By tackling the most difficult feature of the wavefunction head-on, the rest of the approximation becomes much easier. The convergence of the energy with respect to the basis set size is dramatically accelerated. Instead of a painful $L^{-3}$ crawl, the error now vanishes at a blistering pace, often as $L^{-7}$ or even faster. It is crucial to understand that we are still converging to the same exact answer; F12 theory does not change the laws of quantum mechanics. It simply provides a vastly more intelligent sequence of basis sets to get there.

This principle also explains why some quantum chemistry methods suffer from slow basis-set convergence more than others. A wavefunction method like MP2 (whose correlation energy is also the ingredient borrowed by the "double-hybrid" family of density functionals) explicitly constructs the correlated wavefunction using sums over virtual orbitals, and so it directly confronts (and struggles with) the cusp. In contrast, a pure Density Functional Theory (DFT) method models the energy using a functional of the smooth electron density, which is constructed only from the occupied orbitals. The DFT functional accounts for the cusp only implicitly, making the method far less sensitive to the basis-set inadequacies that plague wavefunction methods.

From the ultimate fate of the stars to the subtle dance of electrons that makes chemistry possible, the concept of convergence to a set is a unifying thread. It gives us a language to talk about destiny and a strategy to approach truth. It shows us that by understanding the fundamental properties of the limit we seek, we can find much cleverer paths to reach it.