
Rearrangements of Series

Key Takeaways
  • The commutative law of addition for finite sums does not necessarily hold for infinite series; rearranging terms can fundamentally alter the sum.
  • A series is absolutely convergent if the series of the absolute values of its terms converges; the sum of such a series is immune to rearrangement. A conditionally convergent series, by contrast, converges as written but not absolutely, and its sum can be changed by reordering its terms.
  • The Riemann Rearrangement Theorem states that a conditionally convergent series can be reordered to sum to any desired real number or to diverge.
  • For series of vectors, the set of all possible rearranged sums is not arbitrary but forms a geometric structure, such as a line, as described by the Lévy-Steinitz theorem.

Introduction

The familiar rules of arithmetic, like the commutative law of addition, provide a stable foundation for our mathematical intuition. We instinctively believe that the order in which we add numbers does not affect the outcome. However, when we leap from the finite to the infinite, this bedrock of certainty can crumble in spectacular fashion. This article addresses a profound and counter-intuitive phenomenon: the ability to change the sum of an infinite series simply by shuffling the order of its terms.

This exploration will guide you through the strange and beautiful landscape where the order of operations becomes a creative tool. In the "Principles and Mechanisms" chapter, we will dismantle our finite-world intuition by dissecting the crucial difference between absolute and conditional convergence. You will learn the mechanics behind this behavior and be introduced to the astonishing Riemann Rearrangement Theorem, which reveals the true power held within certain series. Following this, the "Applications and Interdisciplinary Connections" chapter will demonstrate how to harness this power, constructing new sums by design and extending these concepts from the number line to higher-dimensional vector spaces and even the abstract realm of functions, revealing deep connections across different fields of mathematics.

Principles and Mechanisms

In our journey from the finite to the infinite, we often carry with us baggage from the world we know—the comfortable, reliable rules of arithmetic. We learn in school that $a + b = b + a$, and that it doesn't matter whether you compute $(2+3)+4$ or $2+(3+4)$; the answer is the same. These are the commutative and associative laws, the bedrock of our numerical intuition. But the leap to an infinite number of additions—the world of infinite series—is a leap into a strange new territory where familiar signposts can mislead. Here, we'll dismantle our old intuition to build a newer, more powerful one, revealing that the order of addition can, astonishingly, become a matter of profound consequence.

The Commutative Law's Last Stand

Let's begin with a famous mathematical puzzle. Consider the alternating harmonic series, a beautiful and simple series that calculus students learn converges to a specific, well-known value: the natural logarithm of 2.

$$S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots = \ln(2) \approx 0.693$$
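As a quick numerical sanity check (our own sketch, not part of the original article), we can sum the first few thousand terms in their natural order and watch the partial sums close in on $\ln(2)$:

```python
import math

def alt_harmonic_partial(n):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ... (first n terms)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The partial sums approach ln(2) ~ 0.6931, though slowly (error ~ 1/n).
for n in (10, 1000, 100000):
    print(n, alt_harmonic_partial(n))
```

The slow, alternating approach to the limit is exactly the "delicate balance" that rearrangement will later exploit.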

Now, what if we simply rearrange the terms? After all, we're adding up the same numbers, just in a different order. Our intuition, forged by finite sums, screams that the result must be the same. Let's try it. Suppose we take one positive term, followed by two negative terms, and repeat this pattern. We're using all the same terms, eventually. The new series looks like this:

$$S_{\text{new}} = \left(1 - \frac{1}{2} - \frac{1}{4}\right) + \left(\frac{1}{3} - \frac{1}{6} - \frac{1}{8}\right) + \left(\frac{1}{5} - \frac{1}{10} - \frac{1}{12}\right) + \cdots$$

A bit of simple algebra within the parentheses simplifies this nicely:

$$\left(1 - \frac{1}{2}\right) - \frac{1}{4} = \frac{1}{2} - \frac{1}{4}, \qquad \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} = \frac{1}{6} - \frac{1}{8}, \qquad \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} = \frac{1}{10} - \frac{1}{12}$$

Putting it all together, our rearranged series becomes:

$$S_{\text{new}} = \left(\frac{1}{2} - \frac{1}{4}\right) + \left(\frac{1}{6} - \frac{1}{8}\right) + \left(\frac{1}{10} - \frac{1}{12}\right) + \cdots$$

If we factor out $\frac{1}{2}$, we uncover something remarkable:

$$S_{\text{new}} = \frac{1}{2}\left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots\right) = \frac{1}{2} S$$

We've arrived at a shocking conclusion. By merely rearranging the order of the terms, we've cut the sum in half: $S_{\text{new}} = \frac{1}{2}\ln(2)$. This isn't a trick; it's a demonstration of a deep truth. The fundamental error in calling this a paradox lies in the very first assumption we made: that the commutative property of addition for a finite number of terms automatically extends to an infinite number of terms. It doesn't. This single, stunning example shatters our old intuition and forces us to ask: when does order matter, and when does it not?
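We can corroborate this numerically. The sketch below (an illustrative helper of our own) sums the rearranged series in blocks of one positive and two negative terms; block $k$ contributes $\frac{1}{2k-1} - \frac{1}{4k-2} - \frac{1}{4k}$:

```python
import math

def one_pos_two_neg(blocks):
    """Partial sum of the rearrangement (1 - 1/2 - 1/4) + (1/3 - 1/6 - 1/8) + ..."""
    total = 0.0
    for k in range(1, blocks + 1):
        # one positive odd-denominator term, then the next two even-denominator terms
        total += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
    return total

print(one_pos_two_neg(100000), 0.5 * math.log(2))  # the two values agree closely
```

Same terms, different order, half the sum.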

A Tale of Two Convergences: Absolute vs. Conditional

The answer lies in a crucial distinction between two ways an infinite series can converge. Think of it as the difference between being robustly stable and being delicately balanced.

On one hand, we have absolutely convergent series. A series $\sum a_n$ is called absolutely convergent if the series formed by taking the absolute value of every term, $\sum |a_n|$, also converges. These are the "well-behaved" series of the infinite world. Take a simple geometric series like $\sum_{n=0}^{\infty} \left(\frac{1}{2}\right)^n = 1 + \frac{1}{2} + \frac{1}{4} + \cdots = 2$. The series of absolute values is the same series, which converges. Or consider an alternating version, $\sum_{n=0}^{\infty} \left(-\frac{1}{2}\right)^n = 1 - \frac{1}{2} + \frac{1}{4} - \cdots = \frac{2}{3}$. The series of absolute values is $\sum \left|\left(-\frac{1}{2}\right)^n\right| = \sum \left(\frac{1}{2}\right)^n$, which we already know converges. For such absolutely convergent series, a wonderful theorem holds: any rearrangement of the terms results in a series that converges to the exact same sum. The sum is unshakeable, an intrinsic property of the terms themselves, not the order in which they are presented.
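To see this stability in action, here is a small sketch (our own construction): even a drastic rearrangement of the alternating geometric series, taking all even-index terms first and then all odd-index ones, lands on the same sum $\frac{2}{3}$:

```python
def geometric_rearranged(n):
    """Sum sum_{k>=0} (-1/2)^k in a rearranged order: the first n even-index
    terms, then the first n odd-index terms. Absolute convergence guarantees
    the same limit as the natural order."""
    evens = sum((-0.5) ** k for k in range(0, 2 * n, 2))   # 1 + 1/4 + 1/16 + ...
    odds = sum((-0.5) ** k for k in range(1, 2 * n, 2))    # -1/2 - 1/8 - ...
    return evens + odds

print(geometric_rearranged(40))  # ~ 0.6666..., i.e. 2/3, same as the natural order
```

The two "halves" converge separately (to $\frac{4}{3}$ and $-\frac{2}{3}$), so any reordering just reshuffles finite resources.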

On the other hand, we have conditionally convergent series. These are the subtle, fascinating actors on our stage. A series is conditionally convergent if it converges as written, but the series of its absolute values diverges. Our friend, the alternating harmonic series, is the canonical example. We saw it converges to $\ln(2)$. But what about the series of its absolute values?

$$\sum_{n=1}^{\infty} \left|\frac{(-1)^{n+1}}{n}\right| = \sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$$

This is the famous harmonic series, and it diverges—its sum grows without bound. Therefore, the alternating harmonic series is conditionally convergent. And it is precisely this class of series for which rearranging the terms can change the sum.

This distinction is not some esoteric curiosity. It forms a sharp dividing line. Consider the family of series $S_p = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^p}$. For any $p > 0$, the series converges by the alternating series test. However, the series of absolute values is the $p$-series $\sum \frac{1}{n^p}$, which is known to converge only if $p > 1$.

  • If $p > 1$, the series is absolutely convergent. All rearrangements yield the same sum.
  • If $0 < p \le 1$, the series is conditionally convergent. The door is now open to rearrangements that change the sum.

The existence of a rearrangement that alters the sum is the litmus test for conditional convergence.

The Secret of the Infinite Wallets

Why does this dichotomy exist? What is the physical, or at least intuitive, mechanism behind this strange behavior? Let's use an analogy. Imagine you have two magical wallets. One wallet, the "Positive Wallet," is filled with an infinite supply of money in denominations corresponding to all the positive terms of a series. The other, the "Negative Wallet," contains an infinite supply of debts corresponding to all the negative terms.

For an absolutely convergent series, even though there might be infinitely many terms, the total value in the Positive Wallet is finite, and the total debt in the Negative Wallet is finite. For example, in the series $1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \cdots$, the sum of positive terms $1 + \frac{1}{9} + \frac{1}{25} + \cdots$ converges, and the sum of the absolute values of the negative terms $\frac{1}{4} + \frac{1}{16} + \frac{1}{36} + \cdots$ also converges. No matter what order you take out the money and the debts, the final balance is fixed. You simply can't manufacture a new total because your resources, though drawn from infinitely many bills, are finite in value. If the sum of positives converges, so must the sum of negatives (for the original series to converge), which forces absolute convergence and locks in the sum for all rearrangements.

Now, for a conditionally convergent series, the situation is spectacularly different. The total value in the Positive Wallet is infinite, and the total debt in the Negative Wallet is also infinite! In the alternating harmonic series, the sum of positive terms is $1 + \frac{1}{3} + \frac{1}{5} + \cdots$, which diverges to $+\infty$. The sum of negative terms is $-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots$, which diverges to $-\infty$. The only reason the original series converges to $\ln(2)$ is a delicate, lock-step cancellation between the positive and negative terms as they are laid out in their original order. You have two infinite forces pushing in opposite directions, and their carefully orchestrated dance results in them settling at a finite position.
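The "infinite wallet" claim is easy to check numerically (a sketch with hypothetical names of our own): the partial sums of the positive terms alone grow without bound, roughly like $\frac{1}{2}\ln n$:

```python
def positive_wallet(n):
    """Sum of the first n positive terms of the alternating harmonic series:
    1 + 1/3 + 1/5 + ... + 1/(2n-1)."""
    return sum(1 / (2 * k - 1) for k in range(1, n + 1))

# Grows without bound, roughly (1/2) * ln(n) plus a constant.
for n in (10, 10000, 10**6):
    print(n, positive_wallet(n))
```

The growth is logarithmic and therefore slow, but it is inexorable, and that inexhaustibility is what the rearrangement trick spends.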

The Grand Rearrangement: Playing God with Sums

Once you realize you have infinite reservoirs of both positive and negative terms, you are no longer a mere observer of the sum; you are its conductor. This is the essence of the Riemann Rearrangement Theorem, one of the most astonishing results in analysis. It states that if a series of real numbers is conditionally convergent, you can rearrange its terms to make the new series sum to any real number you desire.

Want the alternating harmonic series to sum to, say, the number 100? Easy. Start by pulling terms from your Positive Wallet ($1, \frac{1}{3}, \frac{1}{5}, \dots$) and adding them up until your partial sum just exceeds 100. Since the positive sum diverges, you are guaranteed to get there. Now, your sum is a little over 100. So, you turn to your Negative Wallet ($-\frac{1}{2}, -\frac{1}{4}, \dots$) and start adding those terms until the partial sum just dips below 100. You're guaranteed to get there, too, because the negative sum diverges to $-\infty$. Then you switch back to the Positive Wallet until you just pass 100 again, then back to the Negative...

What prevents this from just bouncing around 100 forever? Here's the final crucial ingredient: for any convergent series (even a rearranged one), the terms themselves must eventually approach zero. So, the amounts you are adding and subtracting in each step are getting smaller and smaller. Your overshoots and undershoots of 100 become progressively tinier, squeezing the partial sums to converge precisely to 100.
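The greedy procedure just described fits in a few lines of code. This is an illustrative sketch (function name and step count are our own choices, not from the text):

```python
import math

def riemann_rearrange(target, steps=200000):
    """Greedily rearrange the alternating harmonic series toward `target`:
    while the running sum is at or below the target, spend from the Positive
    Wallet (1, 1/3, 1/5, ...); otherwise spend from the Negative Wallet
    (-1/2, -1/4, ...). Each term is used exactly once."""
    pos, neg = 1, 2   # next unused odd / even denominator
    total = 0.0
    for _ in range(steps):
        if total <= target:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
    return total

print(riemann_rearrange(math.pi))  # the partial sums hover ever closer to pi
print(riemann_rearrange(-3.0))     # ... or to any other target you name
```

Because each overshoot or undershoot is bounded by the last term used, and those terms shrink to zero, the returned partial sum can be driven as close to the target as patience allows.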

You could have chosen any target: $\pi$, $-42$, or a billion. You can even rearrange the terms to make the new series diverge to $+\infty$ (by being more generous with the positive terms) or to $-\infty$. For a conditionally convergent series like $\sum_{n=2}^{\infty} \frac{(-1)^n}{\ln n}$, the set of all possible sums is the entire real line, $\mathbb{R}$. The order of summation is not just a detail; it's a creative tool of infinite power.

The Finer Art of Shuffling

This newfound power to rearrange must be understood with a bit of subtlety. For instance, can we construct exotic behaviors, like a rearranged series whose partial sums oscillate forever between, say, $-1$ and $1$? Could we create a sum with exactly two limit points? The answer, perhaps surprisingly, is no. The fact that the terms $a_n$ go to zero means the steps the partial sum takes get infinitesimally small. If the partial sums are destined to visit the neighborhood of $-1$ and the neighborhood of $1$ infinitely often, they must necessarily pass through every value in between infinitely often as well. The set of limit points of a bounded rearranged series cannot be two discrete points; it must be a continuous closed interval $[L, U]$. The steps become so small that they "smear out" what would be jumps into a continuous path.

Furthermore, not all rearrangements are created equal. The Riemann Rearrangement Theorem implicitly assumes you have the freedom to move any term from its original position to any other new position, no matter how far. Some "gentle" rearrangements are not powerful enough to change the sum, even for a conditionally convergent series. Consider a rearrangement where you simply swap every pair of adjacent terms: $a_1, a_2, a_3, a_4, \dots$ becomes $a_2, a_1, a_4, a_3, \dots$. It turns out that if the original series converges, this new "swapped" series will also converge, and to the same sum. This is because the change to the partial sums at any step is bounded and tends to zero. Such "bounded displacement" permutations don't mix the terms enough to tap into the infinite wallets in a fundamentally new way.
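A tiny sketch makes the adjacent-swap case concrete (names are our own; note that every even-length partial sum of the swapped series equals the corresponding partial sum of the original, since each swapped pair still contributes $a_k + a_{k+1}$):

```python
import math

def swapped_partial(n_pairs):
    """Partial sum of the alternating harmonic series with every adjacent
    pair swapped: a2, a1, a4, a3, ... (2 * n_pairs terms in total)."""
    def a(k):
        return (-1) ** (k + 1) / k
    total = 0.0
    for k in range(1, 2 * n_pairs, 2):
        total += a(k + 1) + a(k)  # the swapped pair contributes the same as (a_k, a_{k+1})
    return total

print(swapped_partial(100000))  # ~ ln(2): this gentle shuffle changes nothing
```

Odd-length prefixes of the swapped series differ from the original by a single term, which vanishes in the limit, so the full series converges to the same $\ln(2)$.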

The study of infinite series teaches us a valuable lesson. The infinite is not just a very large finite. It is a different realm with different rules. By letting go of our comfortable, finite-world intuitions, we discover a richer, stranger, and more beautiful mathematical reality, where a sequence of numbers can contain within it the potential to become anything we choose.

Applications and Interdisciplinary Connections

In our previous discussion, we stumbled upon a rather shocking secret of the infinite: the order in which you add up the terms of some series can dramatically change the final answer. For a physicist, or indeed anyone accustomed to the orderly rules of finance and everyday arithmetic, this might seem like a bug, a flaw in the fabric of mathematics. But what if it’s not a bug? What if it’s a feature? This chapter is a journey into the world where this 'flaw' becomes a powerful tool, a source of surprising structures, and a bridge connecting different fields of mathematics. We are about to become masters of mathematical shuffling.

The true magic behind the Riemann Rearrangement Theorem is not just that the sum can change, but that we can make it anything we want. Imagine you have a destination in mind, say the number 2. How do you get there by rearranging the alternating harmonic series? The strategy is delightfully simple and reveals the deep mechanics at play. You start your journey at zero. Then, you start picking out only the positive terms — $1$, $\frac{1}{3}$, $\frac{1}{5}$, and so on — and add them up until your sum just overshoots your target of 2. Now you've gone too far. So, you switch tactics. You start picking out the negative terms — $-\frac{1}{2}$, $-\frac{1}{4}$, $-\frac{1}{6}$ — and add them until your sum just dips below 2. You've undershot it. But don't worry! You just switch back to adding positive terms until you creep past 2 again. You repeat this dance, overshooting and undershooting, back and forth across your target. Why does this work? Because the individual terms you are adding are getting smaller and smaller, heading towards zero. Your overshoots and undershoots get progressively tinier, hugging your target value more and more tightly. Eventually, your sequence of sums is squeezed into converging precisely to 2. You could have picked any number — $\pi$, $-1000$, or your favorite number — and this very same strategy would have worked. The only prerequisite is that the series of positive terms alone, and the series of negative terms alone, must both diverge. This gives you an infinite supply of 'fuel' to travel as far as you want in either direction.

This isn't just a thought experiment. Let's see what happens with some specific, artfully designed rearrangements of our old friend, the alternating harmonic series $\sum \frac{(-1)^{n+1}}{n}$, whose 'natural' sum is $\ln(2)$. What if we decide to be more optimistic and take two positive terms for every one negative term? A pattern like $\left(1 + \frac{1}{3}\right) - \frac{1}{2} + \left(\frac{1}{5} + \frac{1}{7}\right) - \frac{1}{4} + \dots$. A careful calculation reveals that this new series sums not to $\ln(2)$, but to $\frac{3}{2}\ln(2)$. We've increased the sum by exactly half its original value! Or what if we try another pattern: one positive term followed by two negative ones? Something like $1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \dots$. The universe obliges, and the new sum is exactly $\frac{1}{2}\ln(2)$, half of the original. By simply adjusting the 'rhythm' of our additions—the ratio of positive to negative terms we pick—we can tune the final sum. These are not random outcomes; they are the direct, calculable consequences of our rearrangement choices.
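The two-positive-one-negative pattern is easy to verify numerically. In the sketch below (an illustration of our own), block $k$ takes the positive terms $\frac{1}{4k-3}$ and $\frac{1}{4k-1}$ and the negative term $-\frac{1}{2k}$:

```python
import math

def two_pos_one_neg(blocks):
    """Partial sum of the rearrangement (1 + 1/3) - 1/2 + (1/5 + 1/7) - 1/4 + ..."""
    total = 0.0
    for k in range(1, blocks + 1):
        total += 1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
    return total

print(two_pos_one_neg(100000), 1.5 * math.log(2))  # the two values agree closely
```

More generally, taking $p$ positive terms for every $q$ negative ones is known to steer the sum to $\ln(2) + \frac{1}{2}\ln\frac{p}{q}$, which reproduces both of the article's examples.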

At this point, you might be feeling a bit of mathematical vertigo. Does any shuffling of terms lead to this chaos? If you shuffled a deck of cards, you'd expect the same 52 cards to be there in the end. Is there any notion of 'conservation' for infinite series? The answer is a resounding 'yes', and it's just as profound as the chaos itself. The key to changing a series' sum lies in the ability to perform long-range swaps. Imagine a permutation where you're only allowed to move a term, say $a_n$, to a new position $\sigma(n)$ that is, at most, a fixed number of spots away, so that $|n - \sigma(n)| \le M$ for some constant $M$. This is called a 'bounded displacement permutation'. What happens if you apply such a 'local' shuffling to a conditionally convergent series? Astonishingly, nothing happens to the sum! The rearranged series will converge, and it will converge to the very same sum as the original series. This tells us something crucial: the Riemann Rearrangement Theorem works because it dips into the infinite tail of the series, pulling terms from arbitrarily far away to precisely construct the new sum. Chaos has its rules, and one of them is that to truly change the outcome, you must have the freedom to reorganize on a global, not just a local, scale.

Now, let's play the role of a composer and mix two different kinds of musical themes. What happens when we create a new series by adding, term by term, a steadfastly stable absolutely convergent series $\sum a_n$ to a flexibly fickle conditionally convergent series $\sum b_n$? Let the sum of the absolutely convergent series be $S_a$. No matter how you rearrange it, its sum is always $S_a$. It's like a rock, an anchor. The conditional series $\sum b_n$, on the other hand, is a kite that can be flown to any height in the sky. When we add them together, term by term, to get $\sum (a_n + b_n)$ and start rearranging this new series, a beautiful thing happens. The $\sum a_n$ part, being stalwart, will always contribute $S_a$ to the final tally, no matter the shuffling. The $\sum b_n$ part, however, can be steered by our rearrangement to sum to any real number, let's call it $L$. The result? The sum of the rearranged combined series will be $S_a + L$. Since $L$ can be any real number, the set of all possible sums for the combined series is the entire real number line, $\mathbb{R}$! What's more, since the conditional series can also be rearranged to diverge to $+\infty$ or $-\infty$, the set of all possible limits for the combined series is the entire extended real line, $\mathbb{R} \cup \{-\infty, \infty\}$. The absolute series simply shifts the entire landscape of possibilities by a fixed amount.
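We can watch this $S_a + L$ shift happen. In the sketch below (our own construction, with our own names), we take $a_n = (1/2)^n$ (so $S_a = 1$) and $b_n = \frac{(-1)^{n+1}}{n}$, and choose the rearrangement greedily so that the conditional part heads toward $L$; the absolutely convergent part rides along and contributes its fixed sum:

```python
import math

def rearranged_combined(L, steps=200000):
    """Rearrange c_n = (1/2)^n + (-1)^(n+1)/n by steering the running sum of
    the conditional part toward L. The geometric part sums to 1 regardless of
    order, so the total approaches 1 + L."""
    pos, neg = 1, 2        # next unused odd / even index n
    b_running = 0.0
    total = 0.0
    for _ in range(steps):
        if b_running <= L:
            n, pos = pos, pos + 2   # take the next positive-b index
        else:
            n, neg = neg, neg + 2   # take the next negative-b index
        b = (-1) ** (n + 1) / n
        b_running += b
        total += 0.5 ** n + b       # combined term a_n + b_n
    return total

print(rearranged_combined(0.25))  # ~ 1.25, i.e. S_a + L
```

The same permutation that flies the conditional "kite" to $L$ leaves the absolutely convergent "anchor" untouched at $S_a$.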

So far, our playground has been the one-dimensional number line. It's natural to ask: what happens if the terms of our series are not just numbers, but vectors in a plane, or complex numbers? Does the chaos take over completely? Let's consider a series of vectors in a 2D plane, $\sum \vec{v}_n$, where each $\vec{v}_n$ has components that form conditionally convergent series. For instance, we could have $\vec{v}_n = \left(\frac{(-1)^{n+1}}{n}, \frac{(-1)^{n+1}}{2n-1}\right)$. We can rearrange the sequence of vectors $\vec{v}_n$ and ask what set of points in the plane can be the sum. One might guess that we could reach any point in the plane. That would be the full generalization of Riemann's theorem. But reality is more subtle and, frankly, more beautiful. The set of all possible sums is not the entire plane. It's a straight line! This is a result of the marvelous Lévy-Steinitz theorem. Why a line? Intuitively, while the series might be 'flexible' in most directions, there might be a special direction in the plane along which the series behaves more tamely, almost like an absolutely convergent series. This 'direction of stability' acts as a constraint. The possible sums can wander freely, but only along a path that respects this constraint. In general, the set of all possible sums forms an affine subspace—a point, a line, or the entire plane.
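For the example above, the stability direction can be computed explicitly: projecting $\vec{v}_n$ onto $w = (1, -2)$ gives $\frac{(-1)^{n+1}}{n} - \frac{2(-1)^{n+1}}{2n-1} = \frac{(-1)^n}{n(2n-1)}$, which is absolutely convergent. So every rearranged sum $(x, y)$ must satisfy $x - 2y = \ln(2) - \frac{\pi}{2}$: a line. The sketch below (our own, with illustrative names) steers the $x$-component to different targets and checks that the sums line up:

```python
import math

def steer_vectors(target_x, steps=400000):
    """Greedily rearrange v_n = ((-1)^(n+1)/n, (-1)^(n+1)/(2n-1)), moving whole
    vectors, so the x-component heads toward target_x. Returns the partial sum."""
    pos, neg = 1, 2   # next unused odd / even index n
    x = y = 0.0
    for _ in range(steps):
        if x <= target_x:
            n, pos = pos, pos + 2
        else:
            n, neg = neg, neg + 2
        sign = 1.0 if n % 2 == 1 else -1.0
        x += sign / n
        y += sign / (2 * n - 1)
    return x, y

# Different targets give different sums, but x - 2*y stays pinned
# near ln(2) - pi/2 for every rearrangement: the sums lie on one line.
for t in (-1.0, 0.0, 2.0):
    x, y = steer_vectors(t)
    print(t, x, y, x - 2 * y)
```

The invariant $x - 2y$ is the numerical shadow of the Lévy-Steinitz constraint: flexible in one direction, rigid in the other.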

The geometry of the outcome is elegantly tied to the structure of these directions of stability. We can define the set $V$ of all 'direction vectors' $w$ for which the projection of our series terms onto that direction, $\text{Re}(\bar{w} z_n)$ for complex terms $z_n$, forms an absolutely convergent series. This set $V$ is a vector subspace of the plane.

  • If the original series was absolutely convergent to begin with, it's stable in all directions. $V$ is the entire plane, and the set of sums $\mathcal{S}$ is a single point (the original sum). The dimension of $\mathcal{S}$ is $2 - \dim(V) = 2 - 2 = 0$.
  • If there is exactly one direction of stability (as in our example), $V$ is a line. The set of sums $\mathcal{S}$ is also a line, orthogonal to the direction of stability. The dimension of $\mathcal{S}$ is $2 - 1 = 1$.
  • If there are no directions of stability—if the series is wildly conditional in every direction—then $V$ is just the origin. The set of sums $\mathcal{S}$ is then the entire plane! The dimension of $\mathcal{S}$ is $2 - 0 = 2$.

This connection between the analytic properties of convergence and the geometric shape of the set of sums is a spectacular example of the unity of mathematics.

Our journey doesn't end with vectors in a plane. The principles of rearrangement extend into even more abstract realms, such as the infinite-dimensional spaces of functions. Imagine a series where each term is not a number, but a continuous function $f_n(x)$ on an interval. If this series of functions is pointwise conditionally convergent, one can ask what kind of new sum functions $G(x)$ can be created by a single rearrangement of the terms. It turns out that the power to rearrange can be used to do some truly strange things. For example, even if every single function $f_n(x)$ in the series is perfectly smooth and continuous, it is possible to find a rearrangement $\sigma$ such that the new sum function $G(x) = \sum f_{\sigma(n)}(x)$ is discontinuous at some points. The act of reordering the summation can shatter the smoothness of the result. This serves as a gateway to the deep and often counter-intuitive world of functional analysis, where our notions of summation and convergence are put to the ultimate test. The simple act of shuffling an infinite list of numbers has led us from arithmetic curiosities to the geometric structure of higher-dimensional spaces, and finally to the very nature of functions themselves. The 'flaw' of conditional convergence, it turns out, is a doorway to a richer and more intricate mathematical universe.