
Riemann Series Theorem

Key Takeaways
  • Unlike finite sums, infinite sums do not always yield the same value regardless of the order of addition; order-independence holds for absolutely convergent series but fails for conditionally convergent ones.
  • The Riemann Series Theorem states that the terms of any conditionally convergent series can be reordered to make the new series sum to any chosen real number, or to diverge.
  • An infinite series is absolutely convergent if the series of its absolute values converges, and its sum remains unchanged no matter how the terms are rearranged.
  • The paradoxical rearrangement property hinges on the fact that the positive terms alone, and negative terms alone, of a conditionally convergent series both diverge to infinity.

Introduction

The commutative property of addition—the simple rule that $a+b$ equals $b+a$—is one of the most fundamental concepts in mathematics. For any finite collection of numbers, the order in which we add them has no bearing on the final sum. But what happens when we step into the realm of the infinite? Can we still trust this basic intuition when our sum has no end? This question reveals a deep and surprising fissure in our understanding of infinity, a place where familiar rules break down and new, more nuanced ones emerge.

This article delves into the fascinating world of infinite series and the conditions under which their sums are stable or startlingly malleable. We will explore the Riemann Series Theorem, a profound result that formalizes this strange behavior. In "Principles and Mechanisms," you will learn to distinguish between the two fundamental types of convergent series—absolute and conditional—and understand the underlying mechanism that allows some infinite sums to be rearranged to any value imaginable. Following this, in "Applications and Interdisciplinary Connections," we will examine the practical consequences of this theorem, from a formula that predicts the outcome of specific rearrangements to the theorem's powerful extensions into the worlds of vectors and functions.

Principles and Mechanisms

In our everyday world, and indeed in much of the mathematics we first learn, the order of addition doesn't matter. If you have a pile of three apples and add a pile of two apples, you get five. If you start with the two and add the three, the result is stubbornly, reassuringly the same. This rule, the commutative property of addition, feels as fundamental as gravity. But as we venture into the strange and beautiful world of the infinite, we find that some of our most trusted intuitions can lead us astray. The ground beneath our feet is not as solid as it seems.

The Illusion of Order: When Addition Isn't Commutative

Let's begin with a famous mathematical object, the alternating harmonic series. It's a simple, elegant sum of fractions that gets smaller and smaller, with alternating signs:

$$S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots$$

Mathematicians have shown that this infinite sum converges to a precise value: the natural logarithm of 2, or approximately $0.693$. Now, let's do something that seems perfectly innocent. Let's simply rearrange the terms. We are, after all, just adding up the same numbers, aren't we?

Consider a student, Alex, who decides to rearrange the series by taking one positive term followed by two negative terms. The list of numbers is identical, just shuffled. The new series, let's call it $S_{\text{new}}$, begins like this:

$$S_{\text{new}} = \left(1 - \frac{1}{2} - \frac{1}{4}\right) + \left(\frac{1}{3} - \frac{1}{6} - \frac{1}{8}\right) + \left(\frac{1}{5} - \frac{1}{10} - \frac{1}{12}\right) + \cdots$$

A little bit of clever algebra reveals something astonishing. If we group the terms slightly differently, we see:

$$S_{\text{new}} = \left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} + \cdots$$

$$S_{\text{new}} = \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \cdots$$

Look closely at this result. Every term is exactly half of the corresponding term in the original series. If we factor out $\frac{1}{2}$, we get:

$$S_{\text{new}} = \frac{1}{2} \left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots \right) = \frac{1}{2} S$$

We have arrived at a spectacular contradiction! By simply changing the order of addition, we've shown that the sum is now half of what it was before. Our student Alex concluded, quite reasonably, that $S = \frac{1}{2} S$, which would mean $S = 0$. But we know $S = \ln(2)$, which is not zero. What has gone wrong? The error lies in the very first assumption: that the commutative property of addition, so reliable for finite sums, holds true for all infinite ones. It doesn't. And understanding when it fails versus when it holds is the key to this entire topic.
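The collapse is easy to witness numerically. Here is a minimal Python sketch (the function name is my own) that sums Alex's one-positive, two-negative rearrangement block by block; the partial sums settle near $\frac{1}{2}\ln(2) \approx 0.347$, not $\ln(2) \approx 0.693$:

```python
import math

def alex_rearrangement(num_blocks):
    """Partial sum of the alternating harmonic series rearranged as
    one positive term followed by two negative terms:
    1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + 1/5 - 1/10 - 1/12 + ...
    The k-th block is 1/(2k-1) - 1/(4k-2) - 1/(4k)."""
    total = 0.0
    for k in range(1, num_blocks + 1):
        total += 1.0 / (2 * k - 1)
        total -= 1.0 / (4 * k - 2)
        total -= 1.0 / (4 * k)
    return total

print(alex_rearrangement(100_000))  # close to 0.5 * ln(2), about 0.3466
```

Each block collapses algebraically to $\frac{1}{4k(2k-1)}$, which is why the truncated sum homes in on half the original value so quickly.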

The Great Divide: Absolute Stability vs. Conditional Malleability

It turns out that infinite series come in two fundamental flavors. The distinction between them is the dividing line between order-independent stability and a bizarre, almost magical, malleability.

First, there are the "unshakable" series. Consider a series like:

$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \cdots$$

If we try to play the same game with this series, we will fail. No matter how you rearrange its terms, the sum will always converge to the exact same value. Why is this series so robust? The secret lies in what happens when we make all its terms positive and consider the sum of their absolute values:

$$\sum_{n=1}^{\infty} \left| \frac{(-1)^{n+1}}{n^2} \right| = \sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$

This series, known as a $p$-series with $p=2$, converges to a finite number ($\frac{\pi^2}{6}$, in fact). When the series of absolute values converges, we say the original series is absolutely convergent. These series are the bedrock of stability. You can shuffle their terms in any way you please, and the sum remains unchanged. They behave just as our intuition expects.

On the other side of the divide are the "malleable" series. Let's go back to the alternating harmonic series. What happens if we sum its absolute values?

$$\sum_{n=1}^{\infty} \left| \frac{(-1)^{n+1}}{n} \right| = \sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$$

This is the famous harmonic series, and it diverges—its sum grows without bound. The original alternating series converges, but only because of a delicate cancellation between its positive and negative terms. When a series converges but its absolute version diverges, we say it is conditionally convergent.

This is the class of series that exhibits the strange behavior we saw earlier. Their convergence is conditional on the specific order of their terms. Other examples include series like $\sum \frac{(-1)^n}{\sqrt{n}}$ and $\sum \frac{(-1)^n}{\ln(n)}$. For the general family of alternating $p$-series, $\sum \frac{(-1)^{n+1}}{n^p}$, this strange world exists precisely when $0 < p \le 1$. The moment $p$ becomes greater than 1, the series becomes absolutely convergent and order ceases to matter. This sharp boundary at $p=1$ separates the world of the predictable from the world of the infinitely rearrangeable.

The Infinite Bank Account: The Mechanism of Rearrangement

So, what is the underlying mechanism that gives conditionally convergent series this almost magical property? The reason is both simple and profound.

Let's think of an infinite series as an infinite sequence of transactions on a bank account. Positive terms are deposits, and negative terms are withdrawals.

For an absolutely convergent series, the sum of all possible deposits is a finite value, say $P$, and the sum of the absolute values of all withdrawals is also a finite value, $Q$. The final balance is simply $P - Q$, and it doesn't matter in what order you process the transactions; the final sum is fixed.

But for a conditionally convergent series, something amazing is true: the sum of its positive terms alone diverges to $+\infty$, and the sum of its negative terms alone diverges to $-\infty$. This is not an assumption; it's a necessary consequence. If, for instance, the positive terms summed to a finite number, then for the whole series to converge, the negative terms would also have to sum to a finite number. This would force the entire series to be absolutely convergent, which it is not.

So, a conditionally convergent series is like having a bank account with an infinite supply of money to deposit and an infinite line of credit to withdraw from. With this power, you can reach any balance you desire! This is the essence of the Riemann Rearrangement Theorem.

Suppose you want the series to sum to the number 1,000,000. You simply start adding the positive terms (deposits) one by one. Since they sum to infinity, you are guaranteed to eventually cross 1,000,000. The moment you do, you switch tactics. You start adding the negative terms (withdrawals). Since they sum to $-\infty$, you are guaranteed to eventually dip back below 1,000,000. Then you switch back to adding positive terms until you are just over, then negative terms until you are just under, and so on.

Crucially, for the original series to converge at all, its terms must be marching towards zero. This means that each time you overshoot or undershoot your target of 1,000,000, the size of the correction gets smaller and smaller. You are inevitably spiraling in on your target. This procedure works for any real number you can imagine, from $\pi$ to $-42$ to $e$. Indeed, for a series like $\sum \frac{(-1)^n}{\ln n}$, the set of all possible sums you can achieve through rearrangement is the entire set of real numbers, $\mathbb{R}$.
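The zig-zag procedure translates directly into code. This Python sketch (the function name and the target $\pi$ are illustrative) greedily rearranges the alternating harmonic series toward an arbitrary real target:

```python
import math

def rearrange_to(target, num_terms):
    """Greedy Riemann rearrangement of the alternating harmonic series:
    add unused positive terms 1/(2k-1) while the running sum is at or
    below the target, unused negative terms -1/(2k) once it is above."""
    pos = neg = 1  # index of the next unused positive / negative term
    total = 0.0
    for _ in range(num_terms):
        if total <= target:
            total += 1.0 / (2 * pos - 1)
            pos += 1
        else:
            total -= 1.0 / (2 * neg)
            neg += 1
    return total

print(rearrange_to(math.pi, 200_000))  # spirals in on pi
```

Because every term is eventually used and the term sizes shrink to zero, each overshoot is smaller than the last; swapping in any other real target works the same way.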

A Universe of Sums

The power of rearrangement doesn't stop at just achieving any target value. By being more deliberate with our "deposits" and "withdrawals," we can make the new series diverge to $+\infty$ (by adding positive terms much more frequently than negative ones) or to $-\infty$.

One might then wonder, can we create even more exotic behavior? For example, could we rearrange the terms so that the partial sums oscillate, getting close to 0, then close to 1, then back to 0, and so on, creating a sequence of sums with exactly two accumulation points? The answer, beautifully, is no. Because the individual terms of the series must approach zero, the "jumps" in the sequence of partial sums become infinitesimally small. If the partial sums visit the neighborhood of 0 and the neighborhood of 1 infinitely often, they must, by necessity, also visit every point in between. The set of accumulation points cannot be two discrete dots; it must be the entire continuous interval $[0, 1]$.

This deep dive into the nature of infinite sums leaves us with a stark and powerful dichotomy. A series is either absolutely convergent, in which case it is stable, robust, and all its rearrangements converge to the same value. Or it is conditionally convergent, in which case it is a wild, untamed thing, capable of being molded to sum to any value, or to diverge. There is no middle ground. You cannot, for instance, have a series where every rearrangement converges but some converge to different values. The very existence of rearrangements with different sums is a symptom of conditional convergence, and conditional convergence implies the existence of rearrangements that diverge. The two properties are mutually exclusive.

The journey from a simple parlor trick with the alternating harmonic series has led us to a fundamental truth about the infinite. It has revealed that the comfortable rules of the finite world are but a special case, and that in the realm of the infinite, concepts like "sum" are more subtle, more flexible, and ultimately, more beautiful than we could have ever imagined.

Applications and Interdisciplinary Connections

In the previous chapter, we stumbled upon a rather shocking secret of the infinite: for certain series, the sum you get depends entirely on the order you add up the terms. This is the wild world of conditional convergence, a realm that seems to defy the common-sense arithmetic we learned in school. With absolutely convergent series, everything is tame and predictable; shuffle the terms all you like, the sum remains stubbornly fixed. But with a conditionally convergent series, you, the mathematician, are handed a strange and powerful new ability. You are no longer a mere spectator, calculating a pre-ordained result. You become a sculptor, and the infinite terms of the series are your clay.

The question we must now ask is: what can we do with this power? If we can change the sum, how do we control it? And does this peculiar property have any echoes in the wider world of science and mathematics? This is not just a strange pathology; it is a gateway to a deeper understanding of convergence, infinity, and structure.

The Forger's Toolkit: Crafting Any Number You Desire

Let’s start with the most direct application. If you don't like the sum of a conditionally convergent series, you can simply change it. How? The proof of Riemann's theorem doesn't just tell us that it's possible; it gives us the recipe.

Imagine you have two infinite piles of numbers, one with all the positive terms of your series and one with all the negative terms. Because the series is conditionally convergent, we know a crucial fact: both of these piles, if summed on their own, would rocket off to infinity. You have an inexhaustible supply of both positive and negative "stuff" to work with.

Now, suppose you want your series to add up to, say, the number 2. The algorithm is delightfully simple and intuitive. You start at zero. You begin picking numbers from your positive pile, adding them to your running total one by one. You keep going until your sum first creeps past 2. Then, you stop and turn to the negative pile. You start adding negative terms, one by one, until your sum dips back below 2. Then you switch back to the positive pile, and so on. You swing back and forth, overshooting and undershooting your target of 2 in ever-decreasing steps. Because the individual terms of the original series must march towards zero, your overshoots and undershoots get smaller and smaller, and your rearranged sum careers drunkenly but inevitably towards your chosen target of 2. You can use this method to make the series converge to any number you wish!

This isn't just a theoretical fancy. Take the famous alternating harmonic series, $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$, which naturally converges to the natural logarithm of 2, or $\ln(2) \approx 0.693$. By systematically taking one positive term for every four negative terms, one can rearrange the series so that it converges to a completely different value: exactly 0. The same principle applies to other series, like the Gregory series for $\pi$, which can be coaxed into summing to new, unexpected values through careful reordering.

A Master Formula for Rearrangement

This process seems almost like an art, a delicate dance between positive and negative terms. But can we make it more precise? Is there a governing rule that connects the way we rearrange the series to the sum we get? For the alternating harmonic series, the answer is a beautiful and resounding yes.

It turns out that what matters is the asymptotic ratio of positive to negative terms you use. Let's say you construct a new series by picking, in the long run, $r$ positive terms for every one negative term. So if you pick them in roughly equal measure, $r=1$. If you favor the positive terms heavily, maybe $r=4$. A remarkable calculation shows that the new sum, $S'$, is given by a wonderfully simple formula:

$$S' = \ln(2) + \frac{1}{2}\ln(r)$$

Look at this formula! It's a thing of beauty. It tells us everything. If we keep a balanced ratio of positive to negative terms, $r=1$, then $\ln(1)=0$, and we get back the original sum, $S' = \ln(2)$. But if we decide to favor positives, say by taking four positive terms for every negative one ($r=4$), the new sum will be $\ln(2) + \frac{1}{2}\ln(4) = \ln(2) + \ln(2) = 2\ln(2)$. We've doubled the sum! If we favor negative terms, say with a ratio of $r = 1/4$, the new sum becomes $\ln(2) + \frac{1}{2}\ln(1/4) = \ln(2) - \ln(2) = 0$. This formula provides a precise, quantitative map between the structure of our rearrangement and the resulting sum. The chaos has a pattern, and it is logarithmic.
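The formula can be checked against direct computation. This sketch (function name my own) builds the rearranged series from repeating blocks of $p$ positive terms followed by $q$ negative terms, so that $r = p/q$, and the truncated sums land where $\ln(2) + \frac{1}{2}\ln(r)$ predicts:

```python
import math

def block_rearranged_sum(p, q, num_blocks):
    """Partial sum of the alternating harmonic series rearranged into
    repeating blocks of p positive terms followed by q negative terms."""
    total = 0.0
    pos = neg = 1
    for _ in range(num_blocks):
        for _ in range(p):
            total += 1.0 / (2 * pos - 1)
            pos += 1
        for _ in range(q):
            total -= 1.0 / (2 * neg)
            neg += 1
    return total

# r = 4: predicted sum ln(2) + 0.5*ln(4) = 2*ln(2) ~ 1.3863
print(block_rearranged_sum(4, 1, 100_000))
# r = 1/4: predicted sum ln(2) + 0.5*ln(1/4) = 0
print(block_rearranged_sum(1, 4, 100_000))
```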

The Boundaries of Chaos: When Order is Preserved

By now, you might be feeling a bit of mathematical vertigo. Does any change in the order of a conditionally convergent series lead to this kind of anarchy? If I just swap two terms, does the sum change?

Here, nature provides a reassuring dose of stability. The spectacular effects of the Riemann theorem are not triggered by just any permutation. They require drastic, "long-range" reshuffling of the terms. Consider a permutation that only moves terms a little bit. Imagine a long line of people, and you ask them to shuffle around, but no one is allowed to move more than, say, ten spots away from their original position. Mathematically, we call this a "bounded displacement permutation".

When you apply such a "gentle" rearrangement to a conditionally convergent series, something wonderful happens: nothing. The series still converges, and it converges to the exact same sum it did before. The reason is intuitive: a bounded shuffle only changes any given partial sum by a finite number of terms, and these terms are all from the "tail" of the original series, where they are becoming vanishingly small. The disturbance is too localized and feeble to change the infinite sum. So, to invoke the strange magic of Riemann's theorem, you must perform a truly global rearrangement, taking terms from the very beginning and flinging them millions of places down the line. The chaos is powerful, but it must be deliberately unleashed.
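A small experiment makes the contrast vivid. Swapping each adjacent pair of terms is a bounded-displacement permutation (no term moves more than one place), and the partial sums of the shuffled alternating harmonic series still head for $\ln(2)$. This sketch checks a long truncation:

```python
import math

N = 200_000
terms = [(-1) ** (n + 1) / n for n in range(1, N + 1)]

# Bounded-displacement shuffle: swap adjacent pairs, so the series
# becomes -1/2 + 1 - 1/4 + 1/3 - ... (each term moves at most one slot).
swapped = terms[:]
for i in range(0, N - 1, 2):
    swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]

print(sum(terms))    # about ln(2)
print(sum(swapped))  # same limit: the gentle shuffle changes nothing
```

For any finite truncation the totals of course agree exactly; the meaningful point is that every partial sum of the swapped series differs from a partial sum of the original by at most a single displaced term, which vanishes in the tail.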

From Numbers to New Worlds: Wider Connections

The story does not end with the real number line. This distinction between absolute and conditional convergence—between tame and wild infinity—is a fundamental theme that echoes throughout many branches of mathematics and science.

First, let's step up from one dimension to two or three. What if the terms of our series are not numbers, but vectors? Think of a series of tiny displacement vectors, or perhaps a sequence of forces acting on an object. If the series of vector lengths converges (that is, if the vector series is absolutely convergent), all is well; the order doesn't matter. But if it's conditionally convergent, what happens? Can we rearrange the vectors to make their sum point to any location in space?

The answer, given by the powerful Lévy-Steinitz theorem, is even more nuanced and beautiful. The set of all possible sums you can get by rearranging the vectors is no longer "everything" or "just one point." Instead, it forms a complete geometric structure: a line, a plane, or the entire space. For example, if you have a conditionally convergent series of vectors in a plane, it might be that you can rearrange them to sum to any vector on a specific line, but you can't get off that line. Or perhaps you can get to any vector in the entire plane! The one thing that's always possible, if the convergence isn't absolute, is to find a rearrangement that makes the sum of vectors spiral or shoot off to infinity—a divergent rearrangement always exists.

Next, consider a truly mind-bending extension: a series of functions. The Fourier series, used in everything from signal processing to quantum mechanics, is a prime example. It builds up a complex function or signal by adding together an infinite series of simple sine and cosine waves. Often, these series are conditionally convergent. Now we ask: what happens if we rearrange the order in which we add the waves?

Let's say each function in our series, $f_n(x)$, is perfectly smooth and continuous. Their sum, $F(x)$, might also be a nice continuous function. If we apply a single, fixed rearrangement to the series, we get a new sum function, $G(x)$. Must $G(x)$ also be continuous? The startling answer is no. It is entirely possible to devise a rearrangement that creates a new sum function that is discontinuous. By only reordering the terms, you can create a sudden jump or a break in a function that was previously smooth. This demonstrates that the power to rearrange is not just the power to change a value, but to change the fundamental character and properties of the mathematical object you are building.

This principle of "wildness" is remarkably robust. If you take a conditionally convergent series and add an absolutely convergent one to it, the resulting series is still "wild"; it can be rearranged to sum to any real number. The stability of the absolute series is completely overwhelmed by the flexibility of the conditional one. Furthermore, this wildness is often preserved under transformations. If you have a conditionally convergent series $\sum a_n$ where the terms go to zero, and you apply a function that behaves like the identity for small inputs (like $f(x) = \arctan(x)$), the new series $\sum \arctan(a_n)$ is often also conditionally convergent, inheriting the full potential for rearrangement from its parent series.

A Tale of Two Infinities

The Riemann Series Theorem, then, is far more than a mathematical curiosity. It is a profound lesson about the nature of infinity. It teaches us that we must be exquisitely careful when dealing with infinite sums. It neatly cleaves the world of infinite series into two vastly different universes.

In the first universe, that of absolute convergence, infinity is tame. The commutative law of addition, which we hold so dear from our finite experience, continues to hold. The sum is a robust, solid property of the set of terms.

In the second universe, that of conditional convergence, infinity is wild and teeming with possibility. Here, the order is paramount. The very notion of "the sum" becomes ambiguous, replaced by a landscape of potential sums that the mathematician can explore and select from. This isn't a failure of mathematics; it is the discovery of a richer, more subtle structure. It is a testament to the fact that infinity is not just a very large number. It is a different concept entirely, with its own rules, its own surprises, and its own inherent, chaotic beauty.