
Khintchine's theorem

Key Takeaways
  • Khintchine's theorem establishes a zero-one law: for a given approximation standard, either almost all real numbers meet it infinitely often, or almost none do.
  • The outcome is determined by the convergence or divergence of the series ∑ ψ(q) formed from the approximating function.
  • The proof of the divergent case requires the approximating function ψ(q) to be non-increasing, in order to manage the arithmetic dependencies between rational approximations.
  • The theorem connects Diophantine approximation to other fields, defining the Hausdorff dimension of exceptional sets and inspiring generalizations to higher dimensions.

Introduction

The vast majority of numbers on the real line are irrational, meaning they cannot be expressed as simple fractions. This brings forth a fundamental question at the heart of number theory: how well can these irrational numbers be approximated by fractions? This field, known as Diophantine approximation, seeks to quantify the quality of such approximations. The central problem is to understand the "size" of the set of numbers that can be approximated with a given degree of accuracy infinitely often. Is this set large or small, and what determines its measure?

This article delves into Khintchine's theorem, a breathtakingly elegant result that provides a definitive answer. It establishes a sharp "zero-one" dichotomy, a law of nature for the number line. The reader will learn how this theorem acts as a precise switch, whose position is determined by the simple convergence or divergence of an infinite series.

To appreciate this masterpiece, we will first explore its foundational "Principles and Mechanisms," unpacking the roles of measure theory and the Borel-Cantelli lemma in its proof. Following this, we will examine its far-reaching "Applications and Interdisciplinary Connections," discovering how this single idea illuminates the fractal nature of exceptional number sets, extends to higher dimensions, and even deepens our understanding of the foundations of calculus.

Principles and Mechanisms

A Question of Measure: Setting the Stage

Imagine you have all the real numbers lined up before you. An endless, seamless line. Now, pick one. What is the chance that it's a simple fraction, like 1/2 or 22/7? The answer, in a very real sense, is zero. The rational numbers, though infinite, are like isolated specks of dust in the vast continuum of the reals. The overwhelming majority are irrational numbers: numbers like π or √2 that can't be written as a ratio of integers.

This raises a deeper question. If we can't write them as fractions, how well can we approximate them with fractions? This is the heart of Diophantine approximation. We want to know, for a given irrational number x, how close we can get with a rational p/q, and how the quality of this approximation depends on the size of the denominator q. It is natural to expect that by allowing larger denominators, we can find better and better approximations. But how much better?

Let's frame the challenge. We'll say a number x is "well-approximated" if we can find infinitely many rational numbers p/q that satisfy an inequality of the form:

|x - p/q| < ψ(q)/q

Here, ψ(q) is our "ruler," an approximating function that dictates how good the approximation must be for a given denominator q. For the approximation to be non-trivial, ψ(q) should tend to zero as q grows. The smaller ψ(q) is, the stricter our demand for accuracy. The set of numbers that meet this standard for infinitely many denominators q is what interests us. We'll call this set W(ψ).
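
This condition is easy to explore on a computer. The short sketch below (the names `psi` and `good_denominators` are my own, and the Dirichlet-style ruler ψ(q) = 1/q is just one illustrative choice) hunts for denominators that satisfy the inequality for x = √2:

```python
from math import sqrt

def psi(q):
    # Dirichlet-style ruler psi(q) = 1/q; an illustrative choice, not the only one
    return 1.0 / q

def good_denominators(x, psi, q_max):
    """Denominators q <= q_max admitting a p with |x - p/q| < psi(q)/q."""
    hits = []
    for q in range(1, q_max + 1):
        p = round(q * x)                  # best numerator for this q
        if abs(x - p / q) < psi(q) / q:
            hits.append((p, q))
    return hits

hits = good_denominators(sqrt(2), psi, 1000)
print(hits[:6])   # early hits include convergents of sqrt(2) such as 3/2, 7/5, 17/12
```

With this ruler the condition reads |x - p/q| < 1/q², and the famous continued-fraction convergents of √2 show up among the solutions.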

Now, a physicist's first instinct when faced with an infinite space is to look for symmetries. The real number line has a beautiful one: translation. If a number x satisfies the inequality |qx - p| < ψ(q) (an equivalent way of writing our condition), what about x + 1? A moment's thought shows that |q(x+1) - (p+q)| = |qx + q - p - q| = |qx - p|. So, if p/q is a good approximation for x, then (p+q)/q is an equally good approximation for x + 1! This means the entire structure of our problem is periodic. Whatever happens in the interval [0,1] is simply copied and pasted onto every other integer interval. Therefore, we can simplify our world and focus our attention entirely on the unit interval [0,1]. If we can understand the nature of approximable numbers living there, we understand them everywhere.

The Borel-Cantelli Machine: From Local Chances to Global Certainty

So, we have our question: what is the "size", that is, the Lebesgue measure, of the set W(ψ) within the interval [0,1]? This set seems incredibly complex, defined by a condition that must hold "infinitely often." How can we possibly get a handle on it?

Let's try a different perspective. Instead of thinking about one number's entire life story of approximations, let's freeze time at a single denominator q. For this fixed q, what is the total length of the set of numbers in [0,1] that satisfy the approximation condition? Let's call this set E_q. It is simply the union of tiny intervals of width 2ψ(q)/q centered at each rational p/q (for p = 0, 1, ..., q).

Let's calculate the measure of this set, m(E_q). If ψ(q) is small enough (say, ψ(q) < 1/2), these little intervals don't overlap: the distance between the centers of adjacent intervals, like p/q and (p+1)/q, is 1/q, while the sum of their "radii" is only 2ψ(q)/q. So we can just add up their lengths. A lovely calculation shows that for q ≥ 2, there are q - 1 full intervals inside (0,1) and two "half-intervals" at the endpoints 0 and 1. The total measure comes out to be astonishingly simple:

m(E_q) = (q - 1) · (2ψ(q)/q) + 2 · (ψ(q)/q) = 2ψ(q)

This is a beautiful and crucial insight. The "probability" that a randomly chosen number in [0,1] satisfies our approximation criterion for a specific denominator q is just 2ψ(q).
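
A quick numerical sanity check (a sketch; `measure_Eq` is an illustrative name) confirms that the union of intervals has total length 2ψ(q) no matter which q we pick:

```python
def measure_Eq(q, psi_q):
    """Length within [0,1] of the set |x - p/q| < psi_q/q for some p in 0..q."""
    r = psi_q / q                                 # radius of each little interval
    pieces = [(max(0.0, p / q - r), min(1.0, p / q + r)) for p in range(q + 1)]
    # for psi_q < 1/2 the pieces are disjoint, so their lengths simply add
    return sum(b - a for a, b in pieces)

for q in (2, 7, 50):
    print(q, measure_Eq(q, 0.3))   # each is 2 * 0.3 = 0.6 (up to float rounding)
```

The answer is independent of q: the intervals get narrower exactly as fast as they get more numerous.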

Now, how do we bridge the gap from the probability at a single q to the measure of the set of numbers that satisfy the condition for infinitely many q? For this, mathematicians have a powerful tool, a kind of logical machine called the Borel-Cantelli lemma. It tells us how to think about the probability of an infinite sequence of events.

The first part of the lemma is the "easy" direction. It says that if the sum of the probabilities of a sequence of events is finite, then the probability that infinitely many of those events occur is zero. Let's feed our problem into this machine. The sum of our probabilities is ∑ m(E_q) = ∑ 2ψ(q). If this sum converges (is finite), the Borel-Cantelli lemma immediately tells us that the measure of the set W(ψ) is zero.

Think about what this means: if the total "budget" of approximation chances is finite, you can't expect to hit the jackpot infinitely often. "Almost no" numbers will be that well-approximable. What's remarkable is how robust this conclusion is. It doesn't matter whether the events E_q are related or not. It doesn't matter whether ψ(q) is a nice, smoothly decreasing function or jumps around erratically. As long as the sum converges, the conclusion holds.
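
A small Monte Carlo experiment illustrates the convergent case (an illustrative sketch, not part of the proof). With ψ(q) = q⁻², the budget ∑ 2ψ(q) is finite, and for randomly chosen x only a handful of denominators ever succeed:

```python
import random

def count_hits(x, q_max):
    """Count q <= q_max with |x - p/q| < psi(q)/q for psi(q) = 1/q**2."""
    hits = 0
    for q in range(1, q_max + 1):
        p = round(q * x)
        if abs(x - p / q) < q ** -3:   # psi(q)/q = q**-3
            hits += 1
    return hits

random.seed(0)
counts = [count_hits(random.random(), 5000) for _ in range(20)]
print(counts)   # small numbers: the total budget sum of 2/q**2 is finite
```

Pushing `q_max` higher barely changes the counts, exactly as the convergent half of Borel-Cantelli predicts.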

The Divergence Dilemma: The Subtle Art of Dependence

So, what happens if the sum of probabilities is infinite?

∑ ψ(q) = ∞

The second part of the Borel-Cantelli lemma suggests an answer. It states that if the events are ​​independent​​ and the sum of their probabilities diverges, then the probability that infinitely many of them occur is one. It seems we're on the verge of a complete theory!

But nature is more subtle. We must ask a crucial question: are our events E_q independent? Does being well-approximated by a fraction with denominator q have any bearing on being well-approximated by a fraction with denominator r? Unfortunately, the answer is a resounding no. The events are deeply entangled by the beautiful, rigid structure of arithmetic. If a number x is very, very close to 1/3, it is in the set E_3. But since 1/3 = 2/6 = 3/9, x is also likely to be in E_6, E_9, and so on. The "events" are correlated. The simple version of the second Borel-Cantelli lemma fails us.

This failure is not a disaster; it is a discovery. It tells us that the problem is more profound than simple probability might suggest. The proof for the divergent case is a masterpiece of modern number theory, a delicate dance showing that these correlations, while present, are not strong enough to spoil the outcome. To make the proof work, we need an extra condition: the approximating function ψ(q) must be non-increasing.

Why is monotonicity so important? Imagine a function ψ(q) that is zero almost everywhere but has large spikes on a very sparse set of denominators, say, the powers of two. The sum ∑ ψ(q) might still diverge, but all our "approximation chances" are concentrated around rationals with denominators 2, 4, 8, 16, .... These events are highly correlated, and we might end up approximating only a small set of numbers. A non-increasing ψ prevents this pathological clustering. It ensures the approximation opportunities are spread "democratically" across all possible denominators.

The actual proof requires a more powerful version of the Borel-Cantelli lemma, one that can handle "quasi-independent" events. It uses a second-moment method to show that the number of successful approximations for a typical number x not only goes to infinity, but does so in a way that is not too different from the average. This difficult argument proves that the set W(ψ) has positive measure. A final, powerful tool called a zero-one law then forces the measure to be exactly 1. The message is clear: the divergent case is where the deepest arithmetic structure lies.

Khintchine's Beautiful Dichotomy

After this long journey through the mechanics of measure and probability, let's step back and admire the final edifice. The Russian mathematician Aleksandr Khintchine synthesized these ideas into a single, breathtaking theorem.

Khintchine's Theorem: Let ψ(q) be a non-increasing function. The set W(ψ) of numbers in [0,1] that satisfy |x - p/q| < ψ(q)/q for infinitely many q has measure either 0 or 1. Specifically:

  • If the series ∑ ψ(q) converges, then m(W(ψ)) = 0.
  • If the series ∑ ψ(q) diverges, then m(W(ψ)) = 1.

There is no middle ground. For any given standard of approximation (a non-increasing ψ), either almost no numbers meet it, or almost all of them do.

Let's see this stunning dichotomy in action. Consider the classic family of approximations ψ(q) = q^(-τ) for some τ > 0. Our condition becomes |x - p/q| < q^(-(τ+1)). The corresponding series is the famous p-series ∑ q^(-τ), which converges if τ > 1 and diverges if τ ≤ 1. Khintchine's theorem then tells us:

  • If τ > 1 (e.g., approximating to order q^(-3) or better), the set of numbers that can be approximated this well infinitely often has measure zero. Exceptionally well-approximable numbers, like Liouville numbers, exist, but they are infinitely rare.
  • If τ ≤ 1 (e.g., approximating to order q^(-2)), the set of numbers that can be approximated this well infinitely often has measure one. Almost every number can do it!
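
The dichotomy is visible numerically. The sketch below (illustrative, not a proof) counts solutions with q up to 20,000 for a few random x, comparing a divergent case (τ = 0.9) against a convergent one (τ = 1.5):

```python
import random

def count_solutions(x, tau, q_max):
    """Count q <= q_max with |x - p/q| < q**-(1 + tau) for some integer p."""
    return sum(abs(x - round(q * x) / q) < q ** -(1.0 + tau)
               for q in range(1, q_max + 1))

random.seed(1)
xs = [random.random() for _ in range(10)]
avgs = {tau: sum(count_solutions(x, tau, 20_000) for x in xs) / len(xs)
        for tau in (0.9, 1.5)}
print(avgs)   # tau = 0.9 (divergent series): dozens of hits; tau = 1.5: only a few
```

For τ = 0.9 the count keeps growing as `q_max` increases; for τ = 1.5 it stalls at a small number, the numerical shadow of the measure-zero versus measure-one split.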

The theorem's precision is astounding. We can probe the very boundary between convergence and divergence. Consider the family of functions ψ_α(q) = 1 / (q (log q) (log log q)^α). Using the integral test from calculus, one can show that the series ∑ ψ_α(q) converges if α > 1 and diverges if α ≤ 1. Khintchine's theorem then gives us an incredibly sharp transition: the measure of the corresponding set of approximable numbers flips from 0 to 1 as the exponent α crosses the threshold of 1. This is not just a qualitative statement; it is a quantitative law of exquisite sharpness, revealing a deep and hidden order in the chaotic world of numbers.

Applications and Interdisciplinary Connections

Now that we have acquainted ourselves with the beautiful machinery of Khintchine's theorem, let's take it for a spin. Where does it lead us? What doors does it open? You might be surprised to find that this one powerful idea—a simple criterion linking the fate of infinitely many approximations to the convergence of a sum—reverberates through many different rooms in the vast house of science, revealing unexpected connections and even forcing us to invent better tools.

The Knife-Edge of Convergence

The most immediate and striking application of Khintchine's theorem is its role as a precise "switch." It tells us that for a vast class of approximation problems, there is no middle ground. The set of numbers that can be approximated infinitely well has either full measure (it includes "almost everything") or zero measure ("almost nothing"). The theorem provides the exact condition that flips this switch.

Imagine we are exploring how well real numbers can be approximated by rationals p/q with an error bound that shrinks just a little faster than the standard Dirichlet bound of 1/q^2. For instance, consider an error bound of the form 1 / (q^2 (ln q)^k). Does a small change in the exponent k matter? Khintchine's theorem answers with a resounding "yes!" By examining the convergence of the series ∑ ψ(q) = ∑ 1 / (q (ln q)^k), we find a critical threshold: the series converges if and only if k > 1.

So, for k = 1, the series diverges and almost every number on the real line can be approximated this well infinitely often. But if we just nudge the exponent up to k = 2 (the next integer value), the series converges and the set of such numbers instantly collapses to Lebesgue measure zero. It's like a phase transition in physics: a tiny change in a parameter causes a dramatic, system-wide shift in behavior. This isn't just a mathematical curiosity; it's a quantitative statement about the very texture of the number line.
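
The threshold at k = 1 can be checked with the integral test. The closed-form comparator below is standard calculus (the antiderivative of 1/(x (ln x)^k) is ln ln x for k = 1 and a bounded power of ln x for k > 1); the code is just an illustrative sketch of that computation:

```python
from math import log

def integral_test_value(k, N):
    """Closed form of the comparator integral from 2 to N of dx / (x (ln x)**k)."""
    if k == 1:
        return log(log(N)) - log(log(2))          # grows like ln ln N: divergence
    return (log(2) ** (1 - k) - log(N) ** (1 - k)) / (k - 1)   # bounded for k > 1

for N in (10**3, 10**6, 10**12):
    print(N, round(integral_test_value(1, N), 3), round(integral_test_value(2, N), 3))
# k = 1 column keeps climbing; k = 2 column stays below 1/ln 2, about 1.443
```

The k = 1 values grow without bound (though agonizingly slowly), while the k = 2 values level off: exactly the divergence/convergence split that flips the measure.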

Beyond Measure Zero: A Glimpse into the Fractal World

This raises a nagging question. When the measure of a set of numbers flips to zero, what happens to them? Do they just vanish? Or are they still there, hiding in a way that our standard Lebesgue yardstick can't perceive? To answer this, we need a finer ruler.

This is where the concept of ​​Hausdorff dimension​​ enters the stage. Think of it as a way to measure the "complexity" or "roughness" of a set, especially those that are too "thin" to have any volume or area. A line has dimension 1, a plane has dimension 2, but a scattered set of points, like dust, can have a fractional dimension between 0 and 1.

Let's return to the set of numbers W(τ) that can be approximated with an error less than q^(-(1+τ)). Khintchine's theorem tells us that for τ > 1, this set has Lebesgue measure zero. But is it empty? Far from it! Thanks to the Jarník-Besicovitch theorem, we know its Hausdorff dimension is dim_H(W(τ)) = 2/(1+τ).

Notice something wonderful here. As we increase τ, making the approximation condition stricter, the dimension 2/(1+τ) decreases from 1 (at τ = 1) towards 0. This means that while all these sets have measure zero for τ > 1, they are not all "the same size." They form a subtle hierarchy of smaller and smaller "fractal dusts."

One can even get a feel for where the formula 2/(1+τ) comes from. A heuristic argument, very much in the spirit of a physicist's back-of-the-envelope calculation, involves counting the approximating intervals and summing their sizes raised to a power s. The critical power s at which the sum transitions from infinity to zero gives a candidate for the dimension. This simple calculation beautifully predicts the correct answer. It's a testament to how intuitive reasoning, guided by the right principles, can lead to profound results.
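
Here is that back-of-the-envelope sum as a sketch (illustrative, with constants ignored: roughly q intervals of length about 2q^(-(1+τ)) for each denominator q). The s-covering sum then behaves like ∑ q^(1 - s(1+τ)), which flips from divergent to convergent exactly at s = 2/(1+τ):

```python
def covering_sum(s, tau, q_max):
    """Heuristic s-dimensional covering sum: for each denominator q there are
    roughly q intervals, each of length about 2 * q**-(1 + tau)."""
    return sum(q * (2.0 * q ** -(1.0 + tau)) ** s for q in range(1, q_max + 1))

tau = 2.0   # predicted critical exponent: 2 / (1 + tau) = 2/3
for q_max in (10**3, 10**4, 10**5):
    print(q_max,
          round(covering_sum(0.60, tau, q_max), 1),   # s below 2/3: keeps growing
          round(covering_sum(0.75, tau, q_max), 1))   # s above 2/3: levels off
```

Just below the predicted dimension the partial sums blow up; just above it they stabilize, which is the heuristic signature of dim_H = 2/(1+τ).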

Testing the Limits: Generalizations of a Great Idea

Like scientists testing a new law of nature under extreme conditions, mathematicians immediately ask: How robust is this theorem? Does it hold in higher dimensions? What if we change the rules of the game?

A Journey into Higher Dimensions

What if we are not on a line, but in a plane or in three-dimensional space? Can we simultaneously approximate the coordinates of a point x = (x_1, ..., x_m) in ℝ^m by rationals with a common denominator, i.e., ||x - p/q||_∞ < ψ(q)/q? The Khintchine-Groshev theorem extends the core idea to this very setting. It states that the set of such points has measure zero or one depending on the convergence or divergence of the series ∑ ψ(q)^m. Notice how the dimension m simply appears as an exponent!
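
The exponent m comes from the same measure heuristic as before: for a fixed denominator q, each of the m coordinates independently lands in a set of measure 2ψ(q), so a random point in [0,1]^m satisfies the simultaneous condition with probability about (2ψ(q))^m. A quick Monte Carlo check (illustrative sketch, with my own function names):

```python
import random

def simultaneous_hit(point, q, psi_q):
    """Does max_i |x_i - p_i/q| < psi_q / q hold for some integers p_i?"""
    return all(abs(x - round(q * x) / q) < psi_q / q for x in point)

random.seed(2)
q, psi_q, m, trials = 17, 0.25, 2, 200_000
freq = sum(simultaneous_hit([random.random() for _ in range(m)], q, psi_q)
           for _ in range(trials)) / trials
print(freq)   # close to (2 * psi_q)**m = 0.25
```

Since the measure of each E_q-like event scales as ψ(q)^m (up to constants), it is the series ∑ ψ(q)^m that governs the zero-one split.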

But here, a beautiful simplification occurs. Recall that the one-dimensional theorem required an annoying technical condition (that ψ(q) be non-increasing) for the divergence case to hold. Amazingly, for dimensions m ≥ 2, this condition is no longer needed! The geometry of higher dimensions provides a kind of intrinsic "mixing" that automatically ensures the quasi-independence needed for the theorem to work. It's as if the problem becomes more well-behaved and elegant as it becomes more complex.

Shifting the Target

What if we try to approximate not just by a rational number p/q, but by a rational shifted by some amount θ_q? This is the world of inhomogeneous Diophantine approximation. We ask: for which x does |qx - p - θ_q| < ψ(q) have infinitely many solutions?

Here, the story becomes richer and more subtle. The "easy" part of Khintchine's theorem (the convergence case) holds in full generality: if ∑ ψ(q) converges, the set of solutions has measure zero, no matter what the shifts θ_q are. But the divergence case is a wild frontier. For it to hold, the sequence of shifts (θ_q) cannot be pathologically chosen. For instance, if the shifts are constant, θ_q = θ, the theorem holds just as in the standard case. But for arbitrary shifts, the question connects to major open problems in number theory, showing us that mathematics is a living, breathing subject with mountains yet to be climbed.

Restricting the Ammunition

Let's change the game in another way. What if we are only allowed a restricted set of "ammunition"? For example, what if we can only use rational numbers whose numerators are perfect squares? We then investigate the set of x for which |qx - p| < ψ(q) has infinitely many solutions with p a perfect square.

The core logic of Khintchine's theorem still guides us, but we must adapt it. The key is to account for the density of our available numerators. The number of perfect squares up to Q is about √Q, far fewer than the Q integers available in the standard problem. By carefully re-evaluating the measure of the approximating sets, we discover a new critical series. For an approximating function ψ(q) = q^(-τ), the critical exponent for this problem is not τ = 1 but τ = 1/2. The sparseness of the squares changes the very nature of the approximation problem. This is a beautiful example of how the arithmetic properties of a set of numbers directly influence the metric, geometric properties of approximation.
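
The density heuristic behind the shifted exponent can be sketched in a few lines (a plausibility check, not the actual proof): with only about √q admissible numerators per denominator, the measure series picks up an extra factor of q^(-1/2), moving the critical exponent from τ = 1 to τ = 1/2.

```python
from math import isqrt

# Density of the "ammunition": about sqrt(Q) perfect squares up to Q
for Q in (100, 10**4, 10**6):
    print(Q, isqrt(Q))                     # 10, 100, 1000

def restricted_measure_sum(tau, q_max):
    """Heuristic measure series when numerators must be squares: about
    sqrt(q) admissible p per denominator, each giving an interval of
    length 2 * q**-tau / q, so terms of size ~ 2 * q**-(tau + 0.5)."""
    return sum(2.0 * q ** -(tau + 0.5) for q in range(1, q_max + 1))

print(round(restricted_measure_sum(0.4, 10**5), 1))   # tau < 1/2: still growing
print(round(restricted_measure_sum(1.0, 10**5), 1))   # tau > 1/2: converged
```

The partial sums diverge for τ below 1/2 and settle for τ above it, matching the new critical series.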

The Wider Perspective: Locating Khintchine's Theorem in the Cosmos

Having explored the depths and extensions of the theorem, let's pull back and view it from a distance. Where does it fit in the grand tapestry of mathematics? We find it's not an isolated peak, but part of a magnificent mountain range, connected to other great results and fundamental concepts in profound ways.

"Almost All" vs. "All Algebraic"

A corollary of Khintchine's theorem is that the irrationality exponent μ(α), the supremum of exponents μ for which |α - p/q| < q^(-μ) has infinitely many rational solutions, equals 2 for "almost all" real numbers α. This means that typical numbers are not "too" well-approximable by rationals. But what about special numbers, like √2 or the golden ratio φ? These are algebraic numbers, solutions to polynomial equations with integer coefficients. The set of all algebraic numbers is countable, so it has Lebesgue measure zero, and Khintchine's theorem tells us nothing about them individually.

Enter Roth's theorem, a monumental achievement in its own right, which states that every algebraic irrational number α also has irrationality exponent 2. This is a stunning confluence. We have a "metric" result that holds for a set of full measure, and a "Diophantine" result that holds for a specific, thin set of measure zero, and they both give the same answer! This doesn't mean all numbers have exponent 2: there are transcendental (non-algebraic) numbers, like Liouville numbers, that are extremely well-approximable and have a larger exponent. This contrast between the "almost all" behavior of typical numbers and the rigid behavior of special classes of numbers is a central theme in number theory.

A Tale of Two Zero-One Laws

Khintchine's theorem is what's known as a "zero-one law": the set of interest has measure either 0 or 1. It shares this property with another famous result: Borel's normal number theorem. A number is "normal" in base 10 if every sequence of digits appears with the expected frequency (e.g., '7' appears 10% of the time, '31' appears 1% of the time, etc.). Borel's theorem states that almost all numbers are normal.

Although both results seem similar, a deeper look reveals a fundamental structural difference. The proof of normality relies on the fact that the probability of a number's digits deviating from normal behavior shrinks exponentially fast. This leads to a series that always converges, so by the Borel-Cantelli lemma, the set of non-normal numbers always has measure zero.

In contrast, the Diophantine approximation sets in Khintchine's theorem have measures that shrink at a rate controlled by our choice of ψ(q). We can tune ψ(q) to make the corresponding series either converge or diverge. It is this "tunability" that gives rise to the rich dichotomy in Khintchine's theorem, which is absent in the normality problem.

A Bridge to the Foundations of Analysis

Finally, let's see how a deep result from number theory can illuminate a fundamental concept in calculus. Consider the function f(x) that equals 1 if x violates Khintchine's theorem on continued fractions (a cousin of the theorem we've been studying) and 0 otherwise. Khintchine's theorem tells us that this set of "exceptional" numbers, let's call it E, has Lebesgue measure zero.

So the function f(x) is 0 "almost everywhere." From the perspective of Lebesgue integration, which is designed to ignore sets of measure zero, the integral of f(x) is simply 0. But what happens if we try to use the old Riemann integral from introductory calculus? It turns out that the set E, while having measure zero, is also dense in the interval (0,1). This means that in any tiny subinterval, no matter how small, you can find both numbers from E and numbers not in E. Consequently, the lower Riemann sum for f(x) is always 0, but the upper Riemann sum is always 1. The two never meet, and the function is not Riemann integrable.
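
This phenomenon can be seen in miniature for any dense measure-zero set (a sketch; here the rationals stand in for the exceptional set E). On every subinterval of every partition the indicator takes both values 0 and 1, so the Darboux sums are pinned at 0 and 1:

```python
from fractions import Fraction
from math import floor

def rational_inside(a, b):
    """An explicit rational strictly between a and b."""
    q = int(1.0 / (b - a)) + 2        # guarantees 1/q < b - a
    p = floor(a * q) + 1              # then p/q falls inside (a, b)
    return Fraction(p, q)

def darboux_sums(n):
    """Lower/upper Darboux sums on a uniform n-piece partition of [0,1] for
    the indicator of a dense measure-zero set (rationals, standing in for E)."""
    lower = upper = 0.0
    for i in range(n):
        a, b = i / n, (i + 1) / n
        r = rational_inside(a, b)      # the indicator equals 1 at this point
        assert a < r < b
        # shifting r by a small irrational amount stays inside the piece,
        # so the indicator is also 0 somewhere here: inf = 0, sup = 1
        lower += 0.0 * (b - a)
        upper += 1.0 * (b - a)
    return lower, upper

for n in (10, 1000):
    print(n, darboux_sums(n))          # lower stays 0, upper stays at about 1
```

No matter how fine the partition, the lower and upper sums never approach each other, which is precisely the failure of Riemann integrability.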

This function, built directly from a number-theoretic principle, provides a beautiful and concrete example of why the Lebesgue integral is a more powerful and natural tool for modern mathematics than the Riemann integral. It shows that number theory isn't just an isolated game of integers and primes; its consequences can force us to rethink and refine the very foundations of other mathematical fields.

From a simple sum to fractal dimensions, from higher-dimensional spaces to the foundations of calculus, the journey of applying Khintchine's theorem reveals the profound unity and interconnectedness of mathematical thought.