
Rearrangement Theorem

SciencePedia
Key Takeaways
  • Absolutely convergent series have a sum that is immune to the reordering of their terms.
  • Conditionally convergent series can be rearranged to sum to any real number, a result known as the Riemann Rearrangement Theorem.
  • This malleability is possible because the subseries of positive terms and the subseries of negative terms both diverge independently to infinity.
  • The principle extends to vector spaces, where the set of possible sums from rearrangement forms a line, a plane, or a single point.

Introduction

Does the order in which you add numbers matter? For a finite list, the answer is no. But when a list of numbers stretches to infinity, our intuition can fail spectacularly. Infinite series introduce a world where the very concept of a "sum" depends on the order of its terms. This article tackles this fascinating paradox by exploring the Rearrangement Theorem, uncovering the profound difference between series that are unshakably stable and those that are magically malleable.

This exploration is divided into two parts. In "Principles and Mechanisms," we will delve into the core distinction between absolute and conditional convergence, revealing why the sum of some series is fixed while others can be reordered to equal any number imaginable. Then, in "Applications and Interdisciplinary Connections," we will see the surprising consequences of this theorem, from constructing new numbers to defining geometric structures in higher dimensions and influencing our understanding of function series.

Principles and Mechanisms

Imagine you have a big bag of numbers, some positive, some negative. You're tasked with adding them all up. In our everyday experience, with a finite number of items, the order in which you add them doesn't matter. Tallying up your grocery bill from top to bottom gives the same result as bottom to top. It seems like a fundamental truth of arithmetic. But what happens when the bag contains an infinite number of items? Does the order still not matter? Here, we venture into the strange and beautiful world of the infinite, where our intuition is both a guide and something to be joyfully questioned. We find that the answer is, "It depends!" And in that dependency lies a profound distinction that separates the unshakably stable from the magically malleable.

The Unshakable Sum: Absolute Convergence

Let's first look at the well-behaved cases. Consider the geometric series $S = -\frac{1}{4} + \frac{1}{16} - \frac{1}{64} + \dots = \sum_{n=1}^{\infty} \left(-\frac{1}{4}\right)^n$. A quick calculation shows its sum is exactly $-\frac{1}{5}$. Now, what if we scrambled the terms? Suppose we take the first ten terms, throw them in a hat, and pull them out in a random order, placing them at the beginning of the series before continuing with the rest. What's the new sum? The surprising answer is that it's still, stubbornly, $-\frac{1}{5}$. You could perform an infinitely complex shuffle, and the sum would refuse to budge.

Why is this series so robust? The secret lies not in the series itself, but in the series of its absolute values: $\sum_{n=1}^{\infty} \left| \left(-\frac{1}{4}\right)^n \right| = \sum_{n=1}^{\infty} \left(\frac{1}{4}\right)^n = \frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \dots$ This series of positive numbers also converges (to $\frac{1}{3}$). When the series formed by taking the absolute value of each term converges, we say the original series is absolutely convergent. This is the property that acts as a guarantee of stability.

A fundamental theorem of mathematics states that if a series is absolutely convergent, any rearrangement of its terms will converge to the exact same sum. It's as if the sum is an intrinsic property of the collection of terms, independent of how they are arranged. For another example, take the series $S = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \dots$ If we check its absolute values, we get $\sum_{n=1}^{\infty} \frac{1}{n^2}$, a famous series that we know converges (to $\frac{\pi^2}{6}$, a result discovered by Euler). Because it converges absolutely, the sum is rock-solid. No amount of shuffling will change its value. Absolute convergence is the mathematical equivalent of having a system with such strong internal connections that its overall state is immune to permutation.
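A quick numerical sanity check, as a sketch only: shuffling a finite prefix is plain commutativity, but it illustrates the stability that the theorem extends to genuinely infinite rearrangements.

```python
import random

# Terms of the absolutely convergent geometric series sum_{n>=1} (-1/4)^n.
terms = [(-0.25) ** n for n in range(1, 60)]
original_sum = sum(terms)

random.seed(42)
shuffled = terms[:]
random.shuffle(shuffled)

print(original_sum)   # close to -1/5
print(sum(shuffled))  # the same value: the sum ignores the ordering
```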

The Magician's Trick: Conditional Convergence

This stability is comforting, but nature is not always so orderly. What happens if a series is walking a finer line? What if the series itself converges, but the series of its absolute values does not? Such a series is called conditionally convergent. It converges only on the condition that its terms are arranged in their specific order, thanks to a delicate cancellation between positive and negative terms.

The most famous example is the alternating harmonic series: $S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots$ This series converges to a specific value, $\ln(2)$. However, its series of absolute values is the harmonic series, $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \dots$, which famously diverges to infinity. So, the alternating harmonic series is conditionally convergent.

And now for the magic trick. What happens if we rearrange its terms? The German mathematician Bernhard Riemann discovered something astonishing: you can make it add up to anything you want. Want the sum to be 100? There's a rearrangement for that. Want it to be $-\pi$? There's a rearrangement for that, too. Want it to just fly off to $+\infty$ or $-\infty$? You can do that as well! This is the content of the Riemann Rearrangement Theorem. It tells us that the "sum" of a conditionally convergent series is not a fixed property, but an artifact of the order in which we add its terms.

This distinction is the key. To find out if a series is a playground for rearrangement, we just need to determine if it's conditionally convergent. For instance, the series $\sum \frac{\cos(n\pi)}{\sqrt{n}} = \sum \frac{(-1)^n}{\sqrt{n}}$ is conditionally convergent, so its sum is malleable. The series $\sum_{n=2}^{\infty} \frac{(-1)^n}{\ln(n)}$ is also conditionally convergent. We can even create a "dial" to move between the world of order and the world of chaos. Consider the series $\sum \frac{(-1)^{n+1}}{n^p}$.

  • If we turn the dial so that $p > 1$, the series is absolutely convergent. The sum is fixed.
  • If we turn the dial to the range $0 < p \le 1$, the series becomes conditionally convergent. Suddenly, the sum becomes a matter of choice, a creative act of rearrangement. The point $p = 1$ marks the precipice, the boundary between stability and this beautiful, infinite flexibility.

How the Trick Works: A Tale of Two Infinities

How can this be? How can simply reordering terms conjure any number out of thin air? The mechanism is surprisingly intuitive once you see it. The secret lies in a deeper property of conditionally convergent series: the series composed of only its positive terms diverges to $+\infty$, and the series composed of only its negative terms diverges to $-\infty$.

Think of it this way: you have two infinite stockpiles. One contains all the positive terms ($1, \frac{1}{3}, \frac{1}{5}, \dots$), and the other contains all the negative terms ($-\frac{1}{2}, -\frac{1}{4}, -\frac{1}{6}, \dots$). The original series converges to $\ln(2)$ because it picks one from the positive pile, then one from the negative, in a very careful, balanced way.

But Riemann's theorem tells us we are free to pick from these piles in any order we wish, as long as we eventually use up every term. So, let's say we want to make the sum converge to 10. We can follow a simple algorithm:

  1. Keep taking numbers from the positive stockpile and adding them to our sum until the total first exceeds 10. We are guaranteed to be able to do this, because the positive terms alone sum to infinity.
  2. Now, our sum is a little over 10. So, we start taking numbers from the negative stockpile and adding them until the total first dips below 10. Again, we are guaranteed to be able to do this, because the negative terms sum to $-\infty$.
  3. Repeat. Go back to the positive pile and add terms until you're just over 10 again. Then back to the negative pile until you're just under 10.

Since the individual terms of the series are getting smaller and smaller (approaching zero), the amount by which we overshoot and undershoot our target of 10 gets smaller with each cycle. Our partial sums are oscillating around 10 with ever-decreasing amplitude, inevitably converging to 10.
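The three-step recipe above can be run numerically. A sketch in Python, using a smaller target of 1.5 rather than 10, since exceeding 10 would require tens of millions of positive terms:

```python
def rearranged_partial_sum(target, n_terms):
    """Greedy Riemann rearrangement of the alternating harmonic series.

    Positive stockpile: 1, 1/3, 1/5, ...  (diverges to +infinity)
    Negative stockpile: -1/2, -1/4, ...   (diverges to -infinity)
    Take positives until the running sum first exceeds `target`,
    then negatives until it first dips below it, and repeat.
    """
    s = 0.0
    p = 0  # positives used so far; the next one is 1/(2p + 1)
    q = 0  # negatives used so far; the next one is -1/(2q + 2)
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / (2 * p + 1)
            p += 1
        else:
            s -= 1.0 / (2 * q + 2)
            q += 1
    return s

# The same bag of terms, steered toward two different sums.
print(rearranged_partial_sum(1.5, 100_000))   # ≈ 1.5
print(rearranged_partial_sum(-0.3, 100_000))  # ≈ -0.3
```

The overshoot at each crossing is bounded by the size of the term that caused it, and those terms shrink to zero, so the partial sums squeeze onto whatever target we chose.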

We can even design a rearrangement to make the series diverge. The strategy is to let our target keep rising. First, take enough positive terms to exceed 2. Add the first negative term. Then take enough new positive terms to exceed 3. Add the second negative term. And so on. At stage $k$, we ensure our sum surpasses $k+1$. The single negative terms we add are not nearly enough to counteract the relentless climb. In a stunning display of the hidden order within this chaos, one can even show that for this specific procedure, the ratio of the number of positive terms used in one stage to the previous stage approaches the constant $e^2 \approx 7.389$. The structure of the infinite is full of such surprises!
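The $e^2$ claim can be checked numerically for the first few stages. A sketch that counts how many positive terms $1, \frac{1}{3}, \frac{1}{5}, \dots$ are consumed before the running total first exceeds each integer target (the ratio of these cumulative counts tends to the same $e^2$ limit):

```python
import math

# counts[k] = number of positive terms 1, 1/3, 1/5, ... consumed before
# the running total first exceeds the target 2, 3, ..., 8.
counts = []
total = 0.0
n = 0
for target in range(2, 9):
    while total <= target:
        n += 1
        total += 1.0 / (2 * n - 1)
    counts.append(n)

ratios = [counts[k + 1] / counts[k] for k in range(len(counts) - 1)]
print(ratios)        # successive ratios creep toward e^2
print(math.e ** 2)   # 7.389...
```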

The Limits of Chaos: When Rearrangement Fails

This power of rearrangement seems almost limitless. Can we create any behavior? For example, could we rearrange the terms so that the sequence of partial sums never settles down, but instead forever oscillates, with the even-numbered partial sums converging to 2 and the odd-numbered ones converging to -2?

Here we find a crucial boundary. The answer is no. Such a rearrangement is impossible. The reason is beautifully simple. For any series, convergent or not, the $n$-th term $b_n$ is just the difference between the $n$-th partial sum $t_n$ and the $(n-1)$-th partial sum $t_{n-1}$. If we had $\lim_{k \to \infty} t_{2k} = 2$ and $\lim_{k \to \infty} t_{2k-1} = -2$, then the even-numbered terms of our series would have to behave like $\lim_{k \to \infty} b_{2k} = \lim_{k \to \infty} (t_{2k} - t_{2k-1}) = 2 - (-2) = 4$. This is a fatal flaw. A rearrangement is just a reshuffling of the original terms. Since the original terms $\frac{(-1)^{n+1}}{n}$ go to 0, any permutation of them must also go to 0. But we've just shown that this hypothetical rearrangement would require a subsequence of its terms to approach 4. This is a contradiction. The magic of rearrangement is powerful, but it cannot violate the fundamental rule that the terms themselves must wither away to nothing.

A Deeper Echo: The Rearrangement Principle in Other Worlds

This idea—that the structure of a system depends on its internal properties of "cancellation"—is not confined to infinite series. It is a deep pattern that echoes in other branches of science, like the group theory used to describe molecular symmetry in chemistry and physics.

A group is a set of operations (like rotations or reflections of a molecule) with a rule for combining them. We can summarize this in a group multiplication table. A striking property, also called the rearrangement theorem, states that in any such table, each row and each column is a perfect permutation of the group's elements; no element is repeated, and none is missing.

What is the source of this rigid order? It is the existence of a unique inverse for every element in the group. If we form a row by multiplying every element $X$ by a fixed element $A$, we get the set of products $\{AX\}$. If we had a repeat, say $AX_1 = AX_2$, the existence of an inverse $A^{-1}$ would allow us to "cancel" the $A$ on both sides, proving that $X_1$ must equal $X_2$. This guaranteed cancellation forbids any repetition, enforcing a strict, stable structure on the table.
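This cancellation argument is easy to verify exhaustively for a small group. A sketch using $S_3$, the six permutations of three objects, with composition as the product (an illustrative choice, not tied to any particular molecule):

```python
# The symmetric group S3 as tuples: element a maps i -> a[i].
elements = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]

def compose(a, b):
    # (a . b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

# Rearrangement theorem for groups: every row {A*X} and column {X*A} of
# the multiplication table is a permutation of the whole group.
for a in elements:
    row = {compose(a, x) for x in elements}
    col = {compose(x, a) for x in elements}
    assert row == set(elements) and col == set(elements)

print("every row and column is a permutation of the group")
```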

Here we see a beautiful parallel. The absolute stability of an absolutely convergent series and the rigid structure of a group multiplication table both stem from a powerful, unambiguous notion of "cancellation" (the convergence of absolute values on one hand, the existence of inverses on the other). The wild, creative flexibility of a conditionally convergent series arises precisely because this cancellation is so fragile and ambiguous that the very path one takes—the order of operations—defines the final result. In both mathematics and the physical world, the rules of rearrangement reveal the deepest properties of the system itself.

Applications and Interdisciplinary Connections

After our journey through the principles and mechanisms of infinite series, one might be left with a feeling of unease. The Riemann Rearrangement Theorem, which states that the terms of a conditionally convergent series can be reordered to sum to any real number, feels less like a theorem and more like a mathematical prank. It seems to suggest that the very idea of a "sum" for certain infinite collections is a fragile illusion. But in science, as in life, the most curious and seemingly paradoxical results are often the gateways to a deeper understanding. This theorem is not a bug; it is a profound feature of the infinite, and its consequences ripple across mathematics and its applications.

The Art of Creation: Forging New Numbers

Let us first appreciate the constructive power of this theorem. It is not merely a statement of possibility; it is a recipe for creation. Imagine you have a collection of building blocks of varying heights, some positive (to build up) and some negative (to tear down). The alternating harmonic series, $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots$, is one such collection, and if you stack the blocks in this exact order, the final height of your tower will be precisely $\ln(2)$.

But the rearrangement theorem tells us this pile of blocks is magical. If we are clever about the order in which we stack them, we can make the final height of the tower anything we want. Let's see how. What if we decide to be a bit more pessimistic and, for every one positive block, we grab the next two available negative ones? We start with $1$, then subtract $\frac{1}{2}$ and $\frac{1}{4}$. Then we add the next positive block, $\frac{1}{3}$, and subtract the next two negatives, $\frac{1}{6}$ and $\frac{1}{8}$, and so on. The series now looks like $1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \dots$ You might guess this new arrangement will surely result in a smaller sum. It does! By carefully grouping the terms and relating them back to the original harmonic series, one can show that this new tower rises to a height of exactly $\frac{1}{2}\ln(2)$. We have created a new number from the same old parts, just by changing the assembly instructions.

This is no fluke. We could, for instance, decide to take three positive terms for every two negative terms from the alternating harmonic series. This specific recipe results in a sum of $\frac{1}{2}\ln(6)$. In fact, the possibilities are limitless. For any conditionally convergent series, such as $\sum_{n=2}^{\infty} \frac{(-1)^n}{\ln n}$, the set of all possible values one can obtain through rearrangement is not some discrete set of numbers or a bounded interval; it is the entire real line, $\mathbb{R}$. The theorem gives us infinite creative control.
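Both recipes can be checked numerically. A sketch, where `rearranged_sum` is a hypothetical helper that takes the next $p$ positive terms, then the next $q$ negative terms, per block:

```python
import math

def rearranged_sum(p, q, blocks):
    """Partial sum after `blocks` rounds of: take the next p positive
    terms of the alternating harmonic series, then the next q negatives."""
    s = 0.0
    i = j = 0  # positives and negatives consumed so far
    for _ in range(blocks):
        for _ in range(p):
            i += 1
            s += 1.0 / (2 * i - 1)
        for _ in range(q):
            j += 1
            s -= 1.0 / (2 * j)
    return s

print(rearranged_sum(1, 2, 50_000), 0.5 * math.log(2))  # ≈ (1/2) ln 2
print(rearranged_sum(3, 2, 50_000), 0.5 * math.log(6))  # ≈ (1/2) ln 6
```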

Taming the Wild: On Stability and Restraint

This power to create seems almost paradoxical. Does it mean addition is broken for infinite sums? Not at all. It just means we have to be more careful. The trick only works for a special class of series—the "conditionally convergent" ones, where the series converges but the series of absolute values does not. Their cousins, the "absolutely convergent" series, are perfectly well-behaved. For them, the sum of the positive terms and the sum of the negative terms are both finite. No matter how you shuffle the deck, the sum remains the same. They are stable, predictable, and "tame."

But what happens if we mix them? What if we add a tame, absolutely convergent series to a wild, conditionally convergent one? Does the tame series calm the wild one down? Not in the slightest! The wildness wins. The resulting sum of the two series is still conditionally convergent, and it can be rearranged to sum to any number on the real line. The chaotic nature of the conditional series completely overwhelms the stability of the absolute one, a testament to the subtle and powerful nature of the divergent harmonic series lurking within.

So, is there any way to restrain this wildness? Can we impose rules on our shuffling that preserve the sum? The answer is a beautiful and surprising "yes." The magic of the Riemann theorem hinges on the ability to move terms arbitrarily far from their original positions. What if we forbid this? Imagine a permutation $\sigma$ that only shuffles terms locally, such that any term originally at position $n$ is moved to a new position $\sigma(n)$ that is no more than a fixed distance $M$ away, i.e., $|n - \sigma(n)| \leq M$ for all $n$. Such a "bounded displacement permutation" is not powerful enough to change the sum. For any conditionally convergent series, if you only rearrange its terms with this bounded displacement rule, the new series is guaranteed to converge to the exact same sum as the original. This remarkable result clarifies that the theorem's power comes not just from reordering, but from the long-distance relocation of terms, which is necessary to selectively deplete the reserves of positive or negative terms.
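A small numerical illustration of a bounded-displacement permutation with $M = 1$: swap each adjacent pair of the alternating harmonic series. With a finite prefix the equality of sums is just commutativity, but the theorem guarantees the full infinite swapped series still converges to $\ln(2)$:

```python
import math

N = 200_000  # length of the prefix (an even number, so pairs are complete)
terms = [(-1) ** (n + 1) / n for n in range(1, N + 1)]

# A bounded-displacement permutation with M = 1: swap each adjacent pair,
# turning 1 - 1/2 + 1/3 - 1/4 + ... into -1/2 + 1 - 1/4 + 1/3 + ...
swapped = []
for k in range(0, N, 2):
    swapped.append(terms[k + 1])
    swapped.append(terms[k])

print(sum(terms))    # ≈ ln 2
print(sum(swapped))  # the same value
print(math.log(2))
```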

New Dimensions: Rearrangements in Vector Spaces

So far, we have played this game on a one-dimensional number line. But the universe is not one-dimensional. What happens if our terms are not just numbers, but vectors, pointing to locations in a plane? If we have a conditionally convergent series of vectors in $\mathbb{R}^2$, can we rearrange them to sum to a vector pointing to any location in the plane?

The answer, provided by the Lévy–Steinitz theorem, is even more intricate and beautiful. The set of all possible sums is no longer necessarily the entire space. Instead, it must be an "affine subspace", which in $\mathbb{R}^2$ means it is either a single point, a straight line, or the entire plane. A series is absolutely convergent if and only if this set of sums is just a single point. For a conditionally convergent series, the set is larger. This implies, crucially, that we can find at least two different rearrangements that converge to two different vectors. By cleverly alternating between these two rearrangements, we can construct a new rearrangement whose partial sums bounce back and forth, never settling down. Thus, for any conditionally convergent vector series, there always exists a rearrangement that diverges.

Let's make this concrete. Suppose we construct a series of vectors where the horizontal ($x$) components form a conditionally convergent series (like the terms $\frac{(-1)^k}{k}$) and the vertical ($y$) components form an absolutely convergent series (like $-\frac{\alpha}{k^s}$ for $s > 1$). What happens when we rearrange the series of vectors? The $y$ components are tame; no matter how we shuffle them, their total sum is fixed by the unchangeable sum of the absolutely convergent series, say $C = -2\alpha\zeta(s)$. But the $x$ components are wild! We can rearrange the vectors in such a way that the $x$ components of the partial sums can be made to converge to any real number we desire. The astonishing result is that the set of all possible sums for our rearranged vector series forms the horizontal line $y = C$. We are pinned to a specific "altitude," but we are given complete freedom to slide to any position left or right. The theorem's power is constrained, but in a beautiful, geometric way.
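A numerical sketch of this picture, with the illustrative choices $\alpha = 1$, $s = 2$, and one vector per index $k$, so the pinned $y$-total for this particular indexing is $-\zeta(2) = -\pi^2/6$. The greedy rearrangement steers the $x$ partial sums toward any target, while the $y$ partial sums land near the same fixed altitude:

```python
import math

# Vectors v_k = ((-1)^k / k, -1 / k^2): x-components conditionally
# convergent, y-components absolutely convergent.
N = 200_000
pos = [( 1.0 / k, -1.0 / k ** 2) for k in range(2, N, 2)]  # even k: x > 0
neg = [(-1.0 / k, -1.0 / k ** 2) for k in range(1, N, 2)]  # odd  k: x < 0

def steer(target, steps=60_000):
    """Greedily steer the x partial sum toward `target`, tracking y too."""
    sx = sy = 0.0
    i = j = 0
    for _ in range(steps):
        if sx <= target:
            vx, vy = pos[i]; i += 1
        else:
            vx, vy = neg[j]; j += 1
        sx += vx
        sy += vy
    return sx, sy

a = steer(0.4)
b = steer(-1.0)
print(a)  # x near 0.4,  y near -pi^2/6
print(b)  # x near -1.0, y near -pi^2/6: pinned to the same altitude
```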

A Symphony of Functions: Rearrangement in Fourier Analysis

The implications of rearrangement extend into the realm of functions, particularly in Fourier analysis, which is fundamental to signal processing, quantum mechanics, and countless other areas of physics and engineering. Many important functions can be represented as infinite series of sines and cosines.

Consider the series $S(x) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\cos(nx)$. For any fixed value of $x$ where the series is conditionally convergent, the Riemann Rearrangement Theorem should apply. So where does this hold?

The series converges for any $x$ that is not an odd multiple of $\pi$. The series of absolute values, $\sum_{n=1}^{\infty} \frac{|\cos(nx)|}{n}$, can be shown to diverge for all $x$. Therefore, for any $x$ that is not an odd multiple of $\pi$, $S(x)$ is a conditionally convergent series of real numbers. This means that for any such $x$, we can rearrange the terms to make the series sum to any real number we choose.

The case of $x = 0$ is a clear illustration of this. The series becomes $\sum \frac{(-1)^n}{n}$, which is the alternating harmonic series (up to a sign). As we know, its positive and negative sub-series diverge, and Riemann's theorem applies in full force. It might seem that for other values of $x$, the oscillating $\cos(nx)$ term would tame the series, but it does not. The interplay between $(-1)^n$ and the changing sign of $\cos(nx)$ is complex, but the core condition for the rearrangement theorem—the divergence of the positive-term and negative-term subseries—holds true.
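The same greedy rearrangement used on the number line works here, point by point. A sketch at the sample point $x = 1$ (an arbitrary choice), splitting a large finite pool of terms of $S(1)$ by sign and steering the sum toward 0.5, well away from the series' natural value of about $-0.56$:

```python
import math

# Terms of S(x) = sum (-1)^n cos(nx)/n at the sample point x = 1.
N = 2_000_000
terms = [((-1) ** n) * math.cos(n) / n for n in range(1, N)]
pos = [t for t in terms if t > 0]   # this subseries diverges to +infinity
neg = [t for t in terms if t <= 0]  # this one diverges to -infinity

# Greedy Riemann rearrangement toward an arbitrary target.
target = 0.5
s, i, j = 0.0, 0, 0
for _ in range(400_000):
    if s <= target:
        s += pos[i]; i += 1
    else:
        s += neg[j]; j += 1

print(s)  # hovers near 0.5, not near the natural value of S(1)
```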

This example highlights a crucial point: when applying the theorem to a series of functions, one must check the convergence conditions point-by-point. The ability to rearrange the sum is not a property of the function series as a whole, but a property of the real number series generated at each specific point xxx. The Rearrangement Theorem reveals a landscape of infinite possibilities that exists at nearly every point in the domain of this function.