
Abel's formula

SciencePedia
Key Takeaways
  • Abel's theorem on power series rigorously justifies evaluating a function at the edge of its interval of convergence if the series converges at that point.
  • Abel's identity for differential equations provides a simple formula for the Wronskian of solutions, depending only on a single coefficient from the original equation.
  • Abel's summation formula, the discrete equivalent of integration by parts, transforms difficult sums into more manageable integrals, crucial for fields like analytic number theory.
  • Collectively, these formulas create powerful bridges between the worlds of discrete mathematics (sums, sequences) and continuous mathematics (functions, integrals).

Introduction

Niels Henrik Abel, a brilliant Norwegian mathematician, left behind a legacy of profound insights often collected under the single name "Abel's formula." This term, however, refers not to a single equation but to a family of powerful ideas that build bridges between two seemingly disparate worlds: the discrete realm of individual steps, sums, and sequences, and the continuous landscape of smooth functions, derivatives, and integrals. This article addresses the fundamental challenge of relating these two domains, showing how Abel's work provides the keys to unlock problems that lie at their intersection. Across the following sections, you will discover the core principles behind these mathematical tools and witness their far-reaching applications. The first section, "Principles and Mechanisms," will deconstruct Abel's theorems for power series and differential equations, as well as his summation formula. Subsequently, "Applications and Interdisciplinary Connections" will demonstrate how these ideas are used to calculate fundamental constants, understand physical systems, and solve complex problems in mathematics and science.

Principles and Mechanisms

It often happens in physics and mathematics that a single, powerful idea acts like a master key, unlocking doors in rooms that, at first glance, seem to have no connection to one another. The work of the brilliant Norwegian mathematician Niels Henrik Abel provides us with a whole ring of such keys, often collected under the single name "Abel's formula." But this name doesn't refer to just one equation; it points to a family of profound insights that build bridges between two seemingly different worlds: the world of the discrete, made of individual steps, sums, and sequences, and the world of the continuous, the smooth landscape of functions, derivatives, and integrals.

To truly appreciate the beauty of Abel's work, we must explore these bridges one by one. We will see how they allow us to perform seemingly impossible feats, like summing an infinite series to find a precise number, understanding the collective behavior of solutions to an equation we can't even solve, and transforming a cumbersome sum into a manageable integral.

The Bridge at the Edge of Infinity: Abel's Theorem for Power Series

Let’s begin with an idea you might have encountered: the power series. You can think of a power series as a sort of "infinite polynomial," a sum of terms with ever-increasing powers of a variable $x$, like $a_0 + a_1x + a_2x^2 + \dots$. For many functions, we can find a power series that perfectly represents the function, at least within a certain range of $x$ values, called the interval of convergence. For example, the function $f(x) = \ln(1+x)$ can be written as:

$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} x^n$$

This series works beautifully for any $x$ between $-1$ and $1$. But a fascinating question arises: what happens right at the edge? What happens at $x=1$? If we plug $x=1$ into the series, we get the famous alternating harmonic series: $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$. Does this sum have any relationship to the function $\ln(1+x)$?

Common sense might suggest that if the series formula works all the way up to $1$, then the sum at $x=1$ should just be $\ln(1+1) = \ln(2)$. This is where Abel's Theorem on Power Series comes in. It provides the rigorous justification for this intuitive leap. The theorem states that if a power series converges to a finite value at an endpoint of its interval of convergence, then the function represented by the series is continuous all the way to that endpoint. In simpler terms, the value the function smoothly approaches as you near the edge is precisely the value of the series at the edge.

Because we can show that the series $1 - \frac{1}{2} + \frac{1}{3} - \cdots$ does indeed converge to a specific number, Abel's theorem gives us the green light. We can confidently say that its sum is exactly $\ln(2)$. It’s a magical result, connecting an infinite discrete sum to a simple, fundamental constant. The same principle allows us to find the value of other, more exotic sums. For instance, the value of the dilogarithm function $\mathrm{Li}_2(x) = \sum_{n=1}^\infty \frac{x^n}{n^2}$ at $x=-1$ is found by applying Abel's theorem to be $\mathrm{Li}_2(-1) = \sum_{n=1}^\infty \frac{(-1)^n}{n^2} = -\frac{\pi^2}{12}$.
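To see this numerically, here is a minimal sketch in plain Python (the function name is ours, chosen for illustration): it sums the power series for $\ln(1+x)$ at a few points inside the interval and then right at the edge $x=1$, where Abel's theorem says the slowly converging alternating harmonic series must land on $\ln(2)$.

```python
import math

def log1p_series(x, terms):
    """Partial sum of x - x^2/2 + x^3/3 - ... for ln(1 + x)."""
    return sum((-1) ** (n - 1) * x ** n / n for n in range(1, terms + 1))

# Inside the interval of convergence the series tracks ln(1+x) closely.
for x in (0.5, 0.9, 0.99):
    print(x, log1p_series(x, 2000), math.log(1 + x))

# At the edge x = 1 the alternating harmonic series still converges,
# very slowly (error ~ 1/N), and its sum matches ln(2).
print(log1p_series(1.0, 200_000), math.log(2))
```

The slow edge convergence (about one digit per tenfold increase in terms) is typical: Abel's theorem guarantees the value, not a fast rate.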

But nature (and mathematics) has rules. The "if" in Abel's theorem is crucial. What if the series doesn't settle down at the boundary? Let’s consider the simple geometric series for the function $f(x) = \frac{1}{1+x}$:

$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots = \sum_{n=0}^{\infty} (-1)^n x^n$$

This series also converges for $x$ between $-1$ and $1$. What happens at the right endpoint, $x=1$? The function value is clear: $\lim_{x \to 1^-} f(x) = \frac{1}{1+1} = \frac{1}{2}$. But the series becomes $1 - 1 + 1 - 1 + \cdots$, which does not converge; its partial sums just bounce back and forth between $1$ and $0$. Since the crucial hypothesis of Abel's theorem—convergence at the endpoint—is not met, the theorem simply doesn't apply. There's no guarantee that the function's limit and the series' behavior should match, and indeed they don't. The theorem is not wrong; its conditions were just not satisfied.
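A quick sketch (illustrative Python) makes the failure visible: the partial sums of Grandi's series $1-1+1-1+\cdots$ oscillate forever, while the function $\frac{1}{1+x}$ calmly approaches $\frac{1}{2}$ from inside the interval.

```python
# Partial sums of 1 - 1 + 1 - 1 + ... never settle down.
partials = []
s = 0
for n in range(8):
    s += (-1) ** n
    partials.append(s)
print(partials)  # [1, 0, 1, 0, 1, 0, 1, 0]

# Yet the geometric series for 1/(1+x) approaches 1/2 as x -> 1^-.
for x in (0.9, 0.99, 0.999):
    series = sum((-1) ** n * x ** n for n in range(20_000))
    print(x, series, 1 / (1 + x))
```

The second loop is, in effect, the "Abel mean" of the divergent series: inside the interval everything is tame, and only the endpoint itself misbehaves.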

We see an even more dramatic failure with the series for $-\ln(1-x)$, which is $\sum_{n=1}^{\infty} \frac{x^n}{n}$. At the endpoint $x=1$, this becomes the harmonic series $1 + \frac{1}{2} + \frac{1}{3} + \cdots$, which famously diverges to infinity. As you would expect, the function $-\ln(1-x)$ also goes to infinity as $x$ approaches $1$. Again, Abel's theorem cannot be used because its main prerequisite is not fulfilled. Interestingly, at the other endpoint, $x=-1$, this same series becomes $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$, which converges. And, just as Abel's theorem predicts, its sum is indeed $\lim_{x \to -1^+} (-\ln(1-x)) = -\ln(2)$. This pair of examples on a single function beautifully illustrates the precise conditions under which this bridge between the continuous function and its discrete series stands firm.

To use the bridge, we must first check its foundation. That is, to apply Abel's theorem, we first need tools to check if the series converges at the endpoint. This is where standard convergence tests, like the Alternating Series Test, become essential practical tools in our kit.

The Hidden Symphony of Solutions: Abel's Identity for Differential Equations

Abel's explorations were not confined to infinite series. He also discovered a remarkable property hidden within the equations that govern countless physical systems, from the swing of a pendulum to the vibrations of a MEMS gyroscope or the fields of quantum mechanics. These systems are often described by second-order linear homogeneous differential equations, which look like this:

$$y'' + p(t)\,y' + q(t)\,y = 0$$

To fully describe the system, we need to find two fundamentally different solutions, $y_1(t)$ and $y_2(t)$, which are called a fundamental set. "Fundamentally different" here has a precise meaning: one cannot be a constant multiple of the other. The tool to measure this independence is the Wronskian, defined as $W(t) = y_1(t)y_2'(t) - y_1'(t)y_2(t)$. If the Wronskian is non-zero, the solutions are truly independent and can be combined to form any possible solution.

You might think that to find the Wronskian, you first need to go through the hard work of solving the equation to find $y_1$ and $y_2$. But here is the magic of Abel's Identity: you don't! The identity reveals that the Wronskian follows a simple, predictable formula that depends only on the function $p(t)$ from the original equation:

$$W(t) = C \cdot \exp\left(-\int p(t)\,dt\right)$$

where $C$ is a constant that depends on which two solutions you picked. This is astonishing. It’s like knowing the total energy of a complex system without knowing the position or velocity of any single particle. The collective behavior of the solutions—their "independence measure"—is governed by a simple, global property of the equation itself.

For example, for the Legendre equation, $(1-t^2)y'' - 2ty' + \lambda y = 0$, we can divide through by $(1-t^2)$ to find that $p(t) = -\frac{2t}{1-t^2}$. Without knowing anything about its complicated solutions (which, for suitable $\lambda$, include the Legendre polynomials), we can immediately use Abel's identity to find that their Wronskian must have the form $W(t) = \frac{C}{1-t^2}$. This tells us a tremendous amount about the solutions' behavior, especially near $t=1$ and $t=-1$, without ever solving the equation.
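We can test this claim without any formula for the solutions: integrate the Legendre equation numerically from two independent initial conditions and watch $(1-t^2)\,W(t)$ stay constant. Below is a minimal sketch in plain Python with a hand-rolled RK4 step; the choices $\lambda = 2$ (for which one solution is $P_1(t) = t$), the step size, and the interval $[0, 0.5]$ are ours, purely for illustration.

```python
def rk4_step(t, y, yp, h, lam):
    """One RK4 step for (1 - t^2) y'' - 2 t y' + lam y = 0."""
    def f(t, y, yp):
        return yp, (2 * t * yp - lam * y) / (1 - t * t)
    k1y, k1p = f(t, y, yp)
    k2y, k2p = f(t + h / 2, y + h / 2 * k1y, yp + h / 2 * k1p)
    k3y, k3p = f(t + h / 2, y + h / 2 * k2y, yp + h / 2 * k2p)
    k4y, k4p = f(t + h, y + h * k3y, yp + h * k3p)
    return (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
            yp + h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p))

lam, h = 2.0, 0.001
# Two independent solutions: y1(0)=0, y1'(0)=1 and y2(0)=1, y2'(0)=0,
# so W(0) = y1 y2' - y1' y2 = -1 and Abel predicts W(t) = -1 / (1 - t^2).
t, (y1, y1p), (y2, y2p) = 0.0, (0.0, 1.0), (1.0, 0.0)
while t < 0.5 - 1e-12:
    y1, y1p = rk4_step(t, y1, y1p, h, lam)
    y2, y2p = rk4_step(t, y2, y2p, h, lam)
    t += h
W = y1 * y2p - y1p * y2
print(t, W, (1 - t * t) * W)  # (1 - t^2) W holds at its initial value -1
```

Nothing about the solutions was assumed; the conserved combination $(1-t^2)W$ falls straight out of Abel's identity.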

This identity is not just an intellectual curiosity; it's a powerful practical tool. Suppose by some stroke of luck you've found one solution, $y_1(t)$, to the differential equation. How do you find a second, independent one? The method of reduction of order provides a systematic answer, and it can be derived directly from Abel's identity. Writing the Wronskian of $y_1$ and the unknown solution as $W(t) = y_1^2\,(y_2/y_1)'$, and setting this equal to the form given by Abel's identity, yields a first-order differential equation for $y_2$ that we can solve. Abel's identity essentially gives us a recipe to turn one known solution into a complete fundamental set. It's a beautiful piece of mathematical engineering, turning a lucky guess into a robust algorithm.
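As a sanity check of the recipe, consider a worked example of our own choosing (not from the text): for $y'' - 2y' + y = 0$ we have $p = -2$, so Abel gives $W = Ce^{2t}$. Starting from the known solution $y_1 = e^t$ and taking $C = 1$, the equation $y_1^2\,(y_2/y_1)' = e^{2t}$ reduces to $(y_2/y_1)' = 1$, so $y_2 = t\,e^t$. The sketch below verifies by central differences that both functions satisfy the equation.

```python
import math

def residual(y, t, h=1e-4):
    """Central-difference evaluation of y'' - 2 y' + y at t."""
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return d2 - 2 * d1 + y(t)

y1 = lambda t: math.exp(t)        # the known solution
y2 = lambda t: t * math.exp(t)    # produced by reduction of order
print(residual(y1, 1.0), residual(y2, 1.0))  # both numerically near zero
```

The logarithm-free second solution here is a preview of a theme we will meet again: the form of $y_2$ is forced by the Wronskian, not guessed.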

The Accountant's Trick: Abel's Summation Formula

The final tool from Abel's collection that we'll examine is perhaps the most versatile. It is another bridge between the discrete and continuous, known as Abel's summation formula or partial summation. It is the discrete analog of the "integration by parts" technique from calculus.

Suppose you have a sum of products, $\sum a_n b_n$. This might be a difficult sum to calculate directly. Abel's formula provides an alternative way to compute or estimate it by transforming it into a more manageable form involving an integral. The formula states:

$$\sum_{n=1}^{N} a_n\,b(n) = A(N)\,b(N) - \int_{1}^{N} A(t)\,b'(t)\,dt$$

Here, the $a_n$ are the terms you are summing, and $A(t) = \sum_{n \le t} a_n$ is their running total (or "summatory function"). The $b(n)$ can be thought of as a set of smooth weights applied to each term. The formula says that the total weighted sum can be found by taking the final total number of items, $A(N)$, multiplied by the final weight, $b(N)$, and then subtracting a correction term. This correction term is an integral that accounts for how the running total $A(t)$ interacts with the rate of change of the weights, $b'(t)$.

Think of it like an accountant's trick. Imagine you are collecting items ($a_n$) over $N$ days. Each day, the items you collect have a certain monetary value ($b(n)$), and this value changes from day to day. To find the total value of everything you've collected, you can't just multiply the total number of items by the final day's value. Abel's formula provides the correct way to do it. The term $A(N)b(N)$ is a first guess, and the integral $\int A(t)\,b'(t)\,dt$ is the precise correction needed to account for the fact that items collected on earlier days were valued differently.
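Because $A(t)$ is a step function, the correction integral collapses exactly into a sum of increments, $\int_1^N A(t)\,b'(t)\,dt = \sum_{n=1}^{N-1} A(n)\,\bigl(b(n+1) - b(n)\bigr)$, which makes the formula easy to verify. A small illustrative sketch (names are ours): take $a_n = 1$ and weights $b(t) = 1/t$, so the left-hand side is the harmonic number $H_N$.

```python
def abel_rhs(a, b, N):
    """A(N) b(N) minus the correction integral, evaluated exactly:
    A(t) is constant on each [n, n+1), so the integral of A(t) b'(t)
    reduces to the sum of A(n) * (b(n+1) - b(n))."""
    A = []
    total = 0.0
    for n in range(1, N + 1):
        total += a(n)
        A.append(total)
    correction = sum(A[n - 1] * (b(n + 1) - b(n)) for n in range(1, N))
    return A[-1] * b(N) - correction

a = lambda n: 1.0
b = lambda t: 1.0 / t
N = 1000
direct = sum(a(n) * b(n) for n in range(1, N + 1))  # harmonic number H_N
print(direct, abel_rhs(a, b, N))  # the two sides agree
```

In analytic number theory the point is usually the reverse: the right-hand side, with $A(t)$ replaced by a smooth estimate, turns an erratic sum into a tractable integral.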

This "accountant's trick" is indispensable in fields like analytic number theory, where mathematicians study the distribution of prime numbers. Sums involving primes are often erratic and difficult to handle. By converting them into integrals using Abel's summation formula, they can be analyzed with the powerful tools of calculus, turning intractable discrete problems into solvable continuous ones.

From the edge of an infinite series to the inner symphony of differential equations, and to the clever accounting of discrete sums, Abel's formulas are a testament to a unified mathematical vision. They are not just isolated tricks, but expressions of a deep and beautiful connection between the discrete and the continuous—a connection that continues to provide profound insights into the structure of both mathematics and the physical world.

Applications and Interdisciplinary Connections

After exploring the foundational principles of Niels Henrik Abel's work, we arrive at a thrilling destination: the real world, in all its mathematical and physical splendor. To speak of "Abel's formula" is to speak of not one, but at least two profound insights that have rippled through science. One gives us a magical lens to view the slippery nature of infinite sums, while the other reveals a hidden conservation law governing the universe of solutions to differential equations. Like two faces of a masterfully cut gem, each reflects a different light, yet together they illuminate the deep, unified structure of mathematics. Let us embark on a journey to see how these ideas empower us to solve problems that were once intractable, from calculating fundamental constants to understanding the behavior of physical systems.

The Art of Summation: Connecting the Interior to the Edge

Imagine you have a beautiful, smooth function defined by a power series, like $f(x) = \sum a_n x^n$. This formula works perfectly within some "interval of convergence," say for all $x$ between $-1$ and $1$. But what happens precisely at the boundary? Can we just plug in $x=1$ and trust the result? This is not a trivial question; infinity is a tricky business. Abel's theorem on power series provides the master key. It tells us, with breathtaking elegance, that if the series of coefficients $\sum a_n$ itself converges to a value $S$, then the function $f(x)$ will glide smoothly towards that same value $S$ as $x$ approaches the boundary. The theorem forges a rigorous bridge between the continuous world inside the interval and the discrete sum waiting at its edge.

This single idea unlocks a treasure chest of results. Consider the famous Gregory-Leibniz series, $1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$. For centuries, mathematicians knew it converged, but to what? We know that the power series $\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n+1}$ is simply the function $\arctan(x)$ for $|x| < 1$. The series we want to sum is what we get by boldly plugging in $x=1$. Because the series at $x=1$ converges (by the alternating series test), Abel's theorem gives us permission to do just that. The sum must be equal to the limit of $\arctan(x)$ as $x$ approaches $1$, which is simply $\arctan(1)$. The answer is $\frac{\pi}{4}$. A seemingly abstract theorem about power series has handed us a piece of $\pi$!
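A two-line numerical check (illustrative Python) shows the edge sum crawling toward $\pi/4$:

```python
import math

# Partial sum of 1 - 1/3 + 1/5 - ...; the tail is bounded by the first
# omitted term, 1/(2N + 1), so convergence at the edge is very slow.
leibniz = sum((-1) ** n / (2 * n + 1) for n in range(100_000))
print(leibniz, math.pi / 4)
```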

This technique is a powerful engine for computation. Many tricky definite integrals can be conquered by first expanding the integrand into a power series, integrating it term-by-term, and then using Abel's theorem to evaluate the resulting series at the boundary. For example, the integral $\int_0^1 \frac{\ln(1+t)}{t}\,dt$ seems daunting. But by expanding the logarithm, integrating, and applying Abel's theorem, we find the integral is equivalent to the alternating sum $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2}$. This famous series, related to the Riemann zeta function, evaluates to the elegant value of $\frac{\pi^2}{12}$. Similarly, the integral $\int_0^1 \frac{\arctan(x)}{x}\,dx$ transforms into the sum $\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^2}$, a value so important it has its own name: Catalan's constant, $G$. In each case, Abel's theorem is the crucial link that guarantees the final step is valid.
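Both routes to the first integral can be checked against each other numerically. The sketch below (plain Python; the midpoint rule and the cutoffs are arbitrary choices of ours) compares $\int_0^1 \ln(1+t)/t\,dt$, its term-by-term series, and $\pi^2/12$.

```python
import math

# Midpoint-rule quadrature of the integral of ln(1+t)/t over [0, 1];
# the integrand extends continuously to 1 at t = 0, so no trouble there.
n = 50_000
h = 1.0 / n
integral = h * sum(math.log(1 + (i + 0.5) * h) / ((i + 0.5) * h)
                   for i in range(n))

# The series obtained by integrating term by term, evaluated at the edge.
series = sum((-1) ** (k + 1) / k ** 2 for k in range(1, 100_000))

print(integral, series, math.pi ** 2 / 12)  # all three agree
```

Note how much faster this series converges than the Gregory-Leibniz one: the $1/n^2$ terms shrink quadratically, so a hundred thousand terms give roughly ten digits.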

The reach of this idea extends far beyond pure mathematics, into the heart of physics and engineering. Consider the Dirichlet problem: you have a metal disk, and you fix the temperature along its circular boundary. What is the steady-state temperature at any point in the interior? The solution, often found using Fourier series, takes the form of a power series in the radial coordinate $r$. Abel's theorem provides the fundamental physical and mathematical guarantee that as you move from the center of the disk towards the edge ($r \to 1^-$), the temperature calculated by your series solution will continuously and correctly approach the fixed temperature you set at the boundary. Without this theorem, our mathematical model for heat flow would have a conceptual gap; with it, the model is complete and robust. This principle also illuminates the relationship between different methods of summation. Abel's theorem tells us that if a series converges in the ordinary sense, its Abel sum will agree. This places it within a broader family of results, including Tauberian theorems, which explore the difficult converse question: under what special conditions does Abel summability imply ordinary convergence?

The Symphony of Solutions: Abel's Law for Differential Equations

Now we turn to the second face of Abel's genius: a profound statement about differential equations. Consider a second-order linear homogeneous equation, the mathematical language of oscillators, waves, and quantum particles: $y'' + P(x)y' + Q(x)y = 0$. This equation is like a musical score, and the solutions, $y(x)$, are the melodies that follow its rules. If we take any two distinct solutions, $y_1$ and $y_2$, we can form their Wronskian, $W = y_1 y_2' - y_1' y_2$, which measures their linear independence. One might expect this quantity to be terribly complicated, depending intricately on the chosen solutions.

Abel's formula for the Wronskian reveals a shocking simplicity. It states that $W(x) = C \exp\left(-\int P(x)\,dx\right)$, where $C$ is a constant. Look closely at this result. The Wronskian's form depends only on the function $P(x)$, the coefficient of the $y'$ term! The entire $Q(x)$ term, no matter how complex, has no say in the matter. This is a powerful conservation law written into the very fabric of the differential equation. For the famous Euler-Cauchy equation, $x^2y'' + axy' + by = 0$, this means the Wronskian must follow a simple power law, $W(x) \propto x^{-a}$, regardless of the value of $b$ or the specific solutions we choose.
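To see the power law concretely, take a hypothetical instance of our own choosing: $x^2y'' + 4xy' + 2y = 0$, so $a = 4$ and $b = 2$. The indicial equation $r(r-1) + 4r + 2 = 0$ gives $r = -1, -2$, hence the explicit solutions $x^{-1}$ and $x^{-2}$. The sketch below confirms $W(x) = -x^{-4} \propto x^{-a}$, exactly as Abel's formula predicts.

```python
# Explicit solutions of x^2 y'' + 4 x y' + 2 y = 0 and their derivatives.
y1, y1p = lambda x: x ** -1, lambda x: -x ** -2
y2, y2p = lambda x: x ** -2, lambda x: -2 * x ** -3

def W(x):
    return y1(x) * y2p(x) - y1p(x) * y2(x)

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, W(x), -x ** -4)  # Abel's law: W proportional to x^(-a), a = 4
```

Changing $b$ would change the roots $r_1, r_2$, but since $r_1 + r_2 = 1 - a$ always, the combination $W \propto x^{r_1 + r_2 - 1} = x^{-a}$ is untouched.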

This "conservation law" becomes an indispensable guide when navigating the treacherous terrain of singular points, where the equation's coefficients blow up. Abel's formula tells us precisely how the Wronskian must behave near such a point. By examining the local behavior of $P(x)$, we can predict the power-law dependence of the Wronskian, $W(x) \sim K(x-x_0)^\alpha$, without needing to find the solutions themselves. This predictive power is on full display when dealing with the special functions of mathematical physics. For the modified Bessel equation, which appears in problems involving heat conduction in cylinders and wave propagation, Abel's formula allows us to calculate the Wronskian of its two fundamental solutions, $I_\nu(x)$ and $K_\nu(x)$, with astonishing ease. The result is simply $W(x) = -\frac{1}{x}$, a compact and vital identity obtained not through brute force, but through the elegant insight of Abel's formula.

Perhaps the most subtle and beautiful application of Abel's Wronskian formula is in explaining a mysterious feature in the solutions of differential equations near regular singular points. Sometimes, the standard series solution method (the method of Frobenius) yields one solution, $y_1$, but the second, independent solution, $y_2$, is forced to include a peculiar logarithmic term, like $y_1(x)\ln(x)$. Where does this logarithm come from? It is not an arbitrary trick; it is a necessity. Abel's formula dictates the exact form the Wronskian must take. If the first solution $y_1(x)$ has a certain power-law behavior, the only way for the combination $y_1 y_2' - y_1' y_2$ to satisfy Abel's law is for the second solution $y_2(x)$ to contain a logarithmic term, whose derivative introduces the precise factor of $1/x$ needed to make everything consistent. The logarithm is the "ghost in the machine," a term whose existence is demanded by the conservation law of the Wronskian.

A Unifying Thread

From summing series to find $\pi$ to explaining the ghostly appearance of logarithms in physics, Abel's insights are a testament to the interconnectedness of mathematics. Both of the great theorems that bear his name are about discovering a global, simple truth from local information. The power series theorem connects the behavior of a function inside its domain to a single point on its boundary. The Wronskian formula connects a single term in a differential equation to a universal property shared by its entire infinite family of solutions. In this, we see the true nature of mathematical beauty—not as a collection of isolated tricks, but as a web of deep, unifying principles that provide a clear and powerful language to describe the world.