
Jensen's Formula

SciencePedia
Key Takeaways
  • Jensen's formula relates the average value of a function's log-magnitude on a circle to its value at the center, adjusted by terms corresponding to its internal zeros and poles.
  • Each zero within a disk increases the function's average magnitude on the boundary, while each pole decreases it, providing a precise measure of their influence.
  • The formula reveals that the average log-magnitude of a function on a circle is a convex function of the logarithm of the circle's radius.
  • This powerful tool has wide-ranging applications, from proving the Fundamental Theorem of Algebra to analyzing digital filters and counting the zeros of the Riemann zeta-function in number theory.

Introduction

In mathematics, some principles are so fundamental they act as a bridge between seemingly disparate concepts. Jensen's formula is one such bridge in the world of complex analysis, elegantly connecting a function's internal structure—the location of its zeros and poles—to its external behavior on a boundary. While our intuition for physical phenomena like heat distribution suggests a function's value at a central point should be the average of its boundary values, this simple picture breaks down for analytic functions in the presence of zeros. The problem Jensen's formula solves is precisely how to correct this simple average to account for these internal "disturbances."

This article embarks on a journey to fully understand this remarkable formula. We will begin by dissecting its core principles and mechanisms, starting from the simple case of a zero-free disk and building up to the complete Poisson-Jensen formula that accommodates both zeros and poles. We will also explore the dynamic relationship between a function's growth and the number of zeros it contains. Following this, we will witness the formula's power in action, exploring its diverse applications and interdisciplinary connections, from providing an elegant proof of the Fundamental Theorem of Algebra to its crucial role in modern digital signal processing and the frontiers of number theory.

Principles and Mechanisms

Imagine you are standing in the exact center of a perfectly circular room. The temperature is not uniform; it varies from point to point on the wall. If I were to ask you, "What's the temperature at your location?", you might guess it's simply the average of all the temperatures on the surrounding wall. In many physical situations, like the steady-state distribution of heat, your intuition would be spot on. This idea is known as the Mean Value Property, and it's a cornerstone for a class of well-behaved functions called harmonic functions.

Now, let's step into the world of complex functions. For an analytic function $f(z)$, the logarithm of its magnitude, $\ln|f(z)|$, is harmonic everywhere except where $f(z)$ is zero or infinite. This gives us a beautiful starting point.

The Simplest Case: A World Without Zeros

Let's suppose our function $f(z)$ is analytic and has no zeros inside a disk of radius $R$ centered at the origin. In this pristine environment, $\ln|f(z)|$ is perfectly harmonic. The Mean Value Property applies directly, and we arrive at a remarkably simple conclusion: the average value of $\ln|f(z)|$ on the boundary circle is exactly its value at the center.

$$\frac{1}{2\pi} \int_0^{2\pi} \ln|f(Re^{i\theta})| \, d\theta = \ln|f(0)|$$

This is the baseline, the law of the land in a zero-free domain. Even if a function has a zero, as long as that zero lies outside our circle of interest, this simple rule holds. For instance, if we analyze the function $f(z) = (1+i)(z-3i)$ on a disk of radius $R=2$, the only zero is at $z=3i$, which is safely outside. As expected, the average of $\ln|f(z)|$ on the circle $|z|=2$ is simply $\ln|f(0)|$. This is Jensen's formula in its most basic form. It tells us that in the absence of any "drama" (zeros) inside our domain, the center holds the average of the boundary.
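We can check this numerically. The sketch below uses only Python's standard library; the helper `avg_log_mag` and its sample count are illustrative choices, not part of the formula. It approximates the boundary average by a uniform Riemann sum, which is extremely accurate for smooth periodic integrands:

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Approximate the average of ln|f| over the circle |z| = R
    with a uniform Riemann sum in the angle theta."""
    total = 0.0
    for k in range(n):
        z = R * cmath.exp(2j * math.pi * k / n)
        total += math.log(abs(f(z)))
    return total / n

# f(z) = (1+i)(z - 3i): its only zero, z = 3i, lies outside the disk
# |z| <= 2, so the boundary average should equal ln|f(0)| = ln(3*sqrt(2)).
f = lambda z: (1 + 1j) * (z - 3j)
lhs = avg_log_mag(f, 2.0)
rhs = math.log(abs(f(0)))
print(lhs, rhs)  # the two values agree to many decimal places
```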

The Disturbance of a Zero

So, what happens when we break this peaceful condition? What happens when a zero, let's call it $a_k$, dares to cross the boundary and enter our disk, so that $|a_k| < R$?

The function $\ln|f(z)|$ is no longer harmonic inside the disk because it has a singularity at $z=a_k$, where it plummets to $-\infty$. The simple Mean Value Property is broken. Jensen's formula is the new law that tells us precisely how it's broken. For every zero $a_k$ inside the disk, a "correction term" is added to the equation:

$$\frac{1}{2\pi} \int_0^{2\pi} \ln|f(Re^{i\theta})| \, d\theta = \ln|f(0)| + \sum_{k} \ln\left(\frac{R}{|a_k|}\right)$$

Let's look at this new term, $\ln(R/|a_k|)$. Since the zero is inside the disk, the ratio $R/|a_k|$ is always greater than 1, which means its logarithm is always positive. This tells us something profound: each zero inside the disk pulls the function's magnitude down near it, which forces the function to be, on average, larger on the boundary circle to compensate. The correction term quantifies this effect. Zeros increase the average boundary value relative to the value at the center.

We can see this in action with a function as simple as $f(z) = (z-a)^n$, which has a zero of multiplicity $n$ at $z=a$. The formula perfectly accounts for this, with the right-hand side evaluating to $\ln|(-a)^n| + n \ln(R/|a|) = n\ln|a| + n(\ln R - \ln|a|) = n\ln R$. Calculating the integral on the left-hand side directly, through some clever use of the Mean Value Property on a different function, confirms that it also equals $n\ln R$. The books balance. For a function with multiple zeros inside the disk, like $f(z) = z^2-3z+2 = (z-1)(z-2)$ inside a circle of radius $R=3$, the contributions from each zero simply add up. The principle is additive and elegant.
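The two-zero example can be checked directly. This is a sketch using the standard library only; `avg_log_mag` is an illustrative helper that averages $\ln|f|$ over the circle by a uniform Riemann sum:

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# f(z) = (z-1)(z-2) on the disk of radius R = 3: both zeros lie inside,
# so Jensen's formula predicts ln|f(0)| + ln(R/1) + ln(R/2) = ln 9.
f = lambda z: (z - 1) * (z - 2)
R = 3.0
lhs = avg_log_mag(f, R)
rhs = math.log(abs(f(0))) + math.log(R / 1.0) + math.log(R / 2.0)
print(lhs, rhs)  # both are ln 9, about 2.1972
```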

Furthermore, this idea isn't tied to circles centered at the origin. If we are interested in a disk of radius $R$ centered at some other point $c$, the formula simply adapts. The center value becomes $\ln|f(c)|$ and the distances to the zeros are measured from the new center, $|a_k - c|$. The underlying principle remains the same: it's a balance between the value at the center and the locations of the zeros inside.
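The off-center version can be sanity-checked the same way. In this sketch (an illustrative helper and example, standard library only), we average over a circle of radius 2 centered at $c = 2$ for $f(z) = (z-1)(z-5)$: the zero at 1 is inside that disk, the zero at 5 is not.

```python
import cmath
import math

def avg_log_mag_centered(f, c, R, n=4000):
    """Average of ln|f| over the circle |z - c| = R (uniform Riemann sum)."""
    return sum(math.log(abs(f(c + R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

f = lambda z: (z - 1) * (z - 5)
c, R = 2.0, 2.0
lhs = avg_log_mag_centered(f, c, R)
# Only the zero at 1 lies inside (|1 - 2| = 1 < 2); the zero at 5 does not,
# so just one correction term appears, measured from the new center.
rhs = math.log(abs(f(c))) + math.log(R / abs(1 - c))
print(lhs, rhs)  # both equal ln 6
```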

The Full Picture: Zeros, Poles, and the Origin

Nature loves symmetry. If a zero, a point where a function vanishes, increases the average boundary value, what would its opposite do? A pole is a point where a function goes to infinity. It's an "anti-zero." Intuitively, a pole should decrease the average boundary value.

This is exactly what happens. By considering a meromorphic function $F(z) = f(z)/g(z)$, whose poles are the zeros of $g(z)$, we can derive the formula for $F(z)$ by simply subtracting the formula for $g(z)$ from that of $f(z)$. The result, known as the Poisson-Jensen formula, is a masterpiece of symmetry:

$$\frac{1}{2\pi} \int_0^{2\pi} \ln|F(Re^{i\theta})| \, d\theta = \ln|F(0)| + \sum_{k} \ln\left(\frac{R}{|a_k|}\right) - \sum_{j} \ln\left(\frac{R}{|b_j|}\right)$$

Here, the $a_k$ are the zeros and the $b_j$ are the poles. The poles contribute with a minus sign, precisely as our intuition suggested.
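A minimal sketch of this cancellation, with an illustrative one-zero, one-pole function and a helper that averages $\ln|F|$ over the circle (standard library only):

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# F(z) = (z - 1)/(z - 0.5i): one zero (a = 1) and one pole (b = 0.5i),
# both inside the disk |z| < 2.  The pole's term enters with a minus sign.
F = lambda z: (z - 1) / (z - 0.5j)
R = 2.0
lhs = avg_log_mag(F, R)
rhs = math.log(abs(F(0))) + math.log(R / 1.0) - math.log(R / 0.5)
print(lhs, rhs)  # both are 0 for this particular F
```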

There's one last special case: what if $f(z)$ has a zero or a pole right at the origin, $z=0$? Then $\ln|f(0)|$ is not defined. The formula needs a slight modification. We look at the first non-zero term in the function's Laurent series expansion near the origin, $f(z) \approx c_k z^k$. The integer $k$ tells us the order of the zero (if $k>0$) or pole (if $k<0$) at the origin. The generalized formula beautifully handles this by replacing the $\ln|f(0)|$ term with $\ln|c_k| + k \ln R$. With this final piece, the formula provides a complete accounting system for the influence of all zeros and poles within a disk.
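The origin correction can be verified with a small sketch (illustrative example and helper, standard library only). For $f(z) = z^2(z-1)$, the leading Laurent term near the origin is $-z^2$, so $c_2 = -1$ and $k = 2$:

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# f(z) = z^2 (z - 1): a double zero at the origin (c_2 = -1, k = 2) plus
# an ordinary zero at z = 1.  The generalized formula predicts
# ln|c_2| + 2 ln R + ln(R/1).
f = lambda z: z**2 * (z - 1)
R = 2.0
lhs = avg_log_mag(f, R)
rhs = math.log(abs(-1)) + 2 * math.log(R) + math.log(R / 1.0)
print(lhs, rhs)  # both are 3 ln 2
```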

The Fine Print: Life on the Edge

We've repeatedly stated that the formula holds provided there are no zeros or poles on the boundary circle $|z|=R$. Why is this rule so important? What happens if a zero lands right on the edge?

The reason is not as simple as "the formula gives infinity." The true reason lies in the mathematical machinery used to prove the formula. Derivations often rely on tools like Green's theorem, which require the functions being integrated to be well-behaved (e.g., continuous and with continuous derivatives) on the entire closed disk, including its boundary.

If a function $f(z)$ has a zero at a point $z_0$ on the circle, then $\ln|f(z_0)|$ is $-\infty$. The function $\ln|f(z)|$ has a logarithmic singularity on the boundary. This singularity is like a pothole in the road; it violates the smoothness conditions required for the proof to work.

Here's a subtle and fascinating point, however. Does the boundary integral itself, $\int \ln|f(Re^{i\theta})| \, d\theta$, become infinite? Not necessarily! For a simple zero on the boundary, as in $f(z) = z-R$, the integral is improper but actually converges to a finite value. The problem isn't that the quantity we want to measure is infinite; the problem is that the beautiful, simple formula connecting it to the interior zeros breaks down. The bridge between the boundary and the interior collapses at that singular point.
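This convergence can be observed numerically. In the sketch below (illustrative; midpoint sampling is chosen purely to avoid evaluating $\ln 0$ at the singular point), the improper integral for $f(z) = z - 1$, whose zero sits at $z = 1$ on the unit circle, comes out finite:

```python
import cmath
import math

# f(z) = z - 1 vanishes at z = 1, ON the unit circle, so the integrand
# ln|e^{i*theta} - 1| dives to -infinity at theta = 0.  Sampling at
# midpoints skips the singular point; the improper integral itself
# converges (its exact value is 0).
def avg_log_mag_midpoint(f, R, n=4000):
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * (k + 0.5) / n))))
               for k in range(n)) / n

avg = avg_log_mag_midpoint(lambda z: z - 1, 1.0)
print(avg)  # small, and shrinking toward 0 as n grows
```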

A Dynamic View: The Music of the Zeros

So far, we have a static snapshot. But the true beauty of Jensen's formula is revealed when we see it in motion. Let's define $\mathcal{M}(R)$ as the average log-magnitude on a circle of radius $R$.

$$\mathcal{M}(R) = \frac{1}{2\pi} \int_0^{2\pi} \ln|f(Re^{i\theta})| \, d\theta$$

Now, imagine slowly inflating this circle, letting the radius $R$ grow. How does $\mathcal{M}(R)$ change? By differentiating Jensen's formula with respect to $R$, we uncover a relationship of breathtaking simplicity. If we let $n(R)$ be the number of zeros (counted with multiplicity) inside the disk of radius $R$, then:

$$R \frac{d\mathcal{M}}{dR} = n(R)$$

This equation is extraordinary. It says that the rate at which the average log-magnitude grows (scaled by the radius) is precisely the number of zeros we have enclosed so far! As we expand our circle through an empty region of the plane, $\mathcal{M}(R)$ grows in a predictable way. But the moment our circle's edge sweeps over a new zero, the value of $n(R)$ jumps up by one, and the growth rate of $\mathcal{M}(R)$ immediately increases.

We can take this one step further. If we think of the average value $\mathcal{M}$ as a function of the logarithm of the radius, $x = \ln R$, its derivative is simply $\frac{d\mathcal{M}}{dx} = n(e^x) = n(R)$. Since $n(R)$ is a non-decreasing function (we can only enclose more zeros as $R$ grows), the second derivative $\frac{d^2\mathcal{M}}{dx^2}$ must be non-negative. This means that the average log-magnitude is a convex function of the log-radius. Its slope only ever increases, and it does so in discrete steps, with each step corresponding to swallowing a new zero.
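Both the counting identity and its step-like growth can be watched numerically. In this sketch (an illustrative cubic and helper, standard library only), a centered finite difference of $\mathcal{M}(R)$, scaled by $R$, recovers the zero count $n(R)$:

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# f has zeros at 1, 2, and 4.  R * dM/dR, estimated by a centered finite
# difference, should report how many zeros the expanding circle has
# swallowed so far.
f = lambda z: (z - 1) * (z - 2) * (z - 4)
h = 1e-3
for R in (1.5, 3.0, 5.0):
    deriv = (avg_log_mag(f, R + h) - avg_log_mag(f, R - h)) / (2 * h)
    print(R, round(R * deriv))  # the zero counts step up: 1, then 2, then 3
```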

This dynamic picture transforms Jensen's formula from a mere accounting identity into a living principle. It paints a picture of the complex plane where the zeros act as sources, and the average value of the function on an expanding circle registers their presence, like a Geiger counter clicking faster as it approaches a radioactive source. This is the deep connection, the inherent unity, that makes Jensen's formula not just a tool, but a window into the very structure of analytic functions.

Applications and Interdisciplinary Connections

Now that we have acquainted ourselves with the machinery of Jensen’s formula, you might be feeling a bit like a mechanic who has just been shown a beautiful, intricate engine. It’s impressive, certainly, but the real fun begins when we turn the key and see where it can take us. What is this formula good for? The answer, it turns out, is astonishingly broad. Jensen's formula is not merely a curiosity of complex analysis; it is a fundamental bridge connecting the inner world of a function—the location of its zeros—to its outward appearance on a boundary. This simple-looking equation is a master key that unlocks doors in fields ranging from the foundations of algebra to the frontiers of number theory and the practicalities of modern engineering.

Foundational Insights: Counting Zeros and Proving the Unprovable

Let's start with one of the most fundamental truths in all of mathematics: the Fundamental Theorem of Algebra, which states that any non-constant polynomial must have at least one root in the complex plane. We all learn this theorem, but how does one prove it? There are many ways, but Jensen's formula offers a particularly elegant and insightful path.

Imagine, for a moment, a polynomial $P(z)$ that defies this theorem—a rebel without a zero. If this were true, Jensen's formula would give us a very simple result. With no zeros inside any circle $|z| < R$, the sum over the zeros vanishes completely, and the formula would declare:

$$\frac{1}{2\pi} \int_0^{2\pi} \ln|P(R e^{i\theta})| \, d\theta = \ln|P(0)|$$

This equation must hold for any radius $R$. It claims that the average value of $\ln|P(z)|$ on a circle of radius $R$ is constant, no matter how large that circle gets! But we know that for a polynomial $P(z) = a_n z^n + \dots + a_0$ with $n \ge 1$, as $R$ becomes very large, the term $a_n z^n$ dominates everything else. The magnitude $|P(z)|$ on a large circle behaves like $|a_n|R^n$. The logarithm of its magnitude, therefore, grows like $\ln(|a_n|R^n) = \ln|a_n| + n \ln R$. This value clearly is not constant; it grows to infinity with $R$. We have a contradiction! The integral on the left must grow with $R$, but the formula, under the "no zeros" assumption, says it must be constant. The only way out of this paradox is to concede that our initial assumption was wrong. A polynomial must have a zero. It's a beautiful proof by contradiction, powered by the connection Jensen's formula forges between a function's growth and its zeros.
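The growth that drives the contradiction is easy to watch. In this sketch (an illustrative cubic and helper, standard library only), the boundary average of $\ln|P|$ tracks $\ln|a_n| + n \ln R$ instead of staying constant:

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# P(z) = 2z^3 + z + 1 (degree 3, leading coefficient 2).  Once R exceeds
# the largest root magnitude, Jensen's formula makes the boundary average
# exactly ln 2 + 3 ln R: it grows without bound, contradicting the
# "no zeros" prediction that it stay fixed at ln|P(0)| = 0.
P = lambda z: 2 * z**3 + z + 1
for R in (10.0, 100.0, 1000.0):
    print(R, avg_log_mag(P, R), math.log(2) + 3 * math.log(R))
```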

This idea of "counting zeros" can be made more quantitative. The formula doesn't just tell us that zeros exist; it helps us estimate how many there are. By rearranging the formula and making some careful estimates, one can derive a powerful upper bound on the number of zeros, $n(r)$, inside a disk of radius $r$. This bound shows that $n(r)$ is controlled by the ratio of the function's maximum value on a larger circle to its value at the center. In essence, if a function doesn't grow too quickly, it can't cram too many zeros near its center. This principle is a cornerstone of what is known as Nevanlinna theory, a vast generalization of these ideas that maps out the relationship between a function's growth and the distribution of its values.

Engineering and Physics: From Abstract Formulas to Real-World Systems

You might think this is all abstract mathematics, but these ideas have profoundly practical consequences. Consider the field of digital signal processing, which underpins everything from your phone's audio equalizer to medical imaging. A digital filter is often described by a rational transfer function, $H(z) = B(z)/A(z)$, which has its own set of zeros (the zeros of $B(z)$) and poles (the zeros of $A(z)$).

The behavior of this filter—how it modifies frequencies—is captured by its values on the unit circle, $H(e^{j\omega})$. The integral of $\ln|H(e^{j\omega})|$ over the unit circle is a measure of the overall "energy" or "gain" of the filter's magnitude response. By applying Jensen's formula to the numerator and denominator separately, we arrive at a remarkable identity. It states that this integral is determined by the system's gain at zero frequency, the locations of its zeros inside the unit circle, and the locations of its poles inside the unit circle.

This has immediate implications for filter design. For a filter to be stable, all its poles must lie inside the unit circle. For it to be what's called "minimum phase"—a desirable property meaning it responds as quickly as possible to an input—all its zeros must also lie inside the unit circle. Jensen's formula provides the mathematical backbone for understanding these properties, linking the physical behavior of a system directly to the geometric locations of its poles and zeros in the complex plane.
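As a concrete sketch (a hypothetical first-order filter chosen for illustration, not a standard design; the helper averages $\ln|H|$ over the unit circle), consider $H(z) = (z - 0.5)/(z - 0.8)$, which is stable and minimum phase because both its zero and its pole lie inside the unit circle:

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# A stable, minimum-phase first-order filter: zero at 0.5, pole at 0.8,
# both inside the unit circle.
H = lambda z: (z - 0.5) / (z - 0.8)
lhs = avg_log_mag(H, 1.0)  # average of ln|H(e^{j*omega})| over frequency
rhs = math.log(abs(H(0))) + math.log(1 / 0.5) - math.log(1 / 0.8)
print(lhs, rhs)  # both are 0 for this filter
```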

This connection between interior points and a boundary value hints at a deep physical analogy. In electrostatics, Gauss's law tells us that the total electric flux through a closed surface is proportional to the total charge enclosed. Jensen's formula can be seen as a cousin to this law in two dimensions. If we think of $\ln|f(z)|$ as an electrostatic potential, then the zeros and poles are like point charges. The formula relates the average potential on a circular boundary to the "charges" contained within. This unity of mathematical structure across different branches of science is precisely the kind of deep beauty that Feynman so cherished.

Frontiers of Modern Mathematics: Measuring Polynomials and Primes

The reach of Jensen's formula extends into the most profound and active areas of mathematical research. One such area is the study of polynomials themselves. How do you measure the "size" of a polynomial? One way is the Mahler measure, which is defined as the geometric mean of its magnitude on the unit circle. Thanks to Jensen's formula, this integral definition is equivalent to a much more algebraic one: the magnitude of the leading coefficient multiplied by the product of the magnitudes of all roots that lie outside the unit circle.

This measure beautifully blends analysis (the integral) and algebra (the roots). It has surprising connections to linear algebra and number theory. For instance, the Mahler measure of the characteristic polynomial of a matrix tells us about the asymptotic growth of its powers. In some cases, these measures turn out to be special numbers, like the golden ratio.
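The two definitions are easy to compare numerically. In this sketch (standard library only; the polynomial $z^2 - z - 1$ is an illustrative choice whose single root outside the unit circle is the golden ratio):

```python
import cmath
import math

def avg_log_mag(f, R, n=4000):
    """Average of ln|f| over the circle |z| = R via a uniform Riemann sum."""
    return sum(math.log(abs(f(R * cmath.exp(2j * math.pi * k / n))))
               for k in range(n)) / n

# P(z) = z^2 - z - 1 has roots phi = (1 + sqrt(5))/2 and -1/phi.  Only phi
# lies outside the unit circle, so the Mahler measure (the exponential of
# the average of ln|P| on the circle) should equal |1| * phi.
P = lambda z: z**2 - z - 1
mahler = math.exp(avg_log_mag(P, 1.0))
phi = (1 + math.sqrt(5)) / 2
print(mahler, phi)  # the geometric mean on the circle is the golden ratio
```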

Perhaps the most breathtaking application of Jensen's formula lies at the heart of number theory, in the study of the prime numbers. The distribution of primes is intimately connected to the zeros of the Riemann zeta-function, $\zeta(s)$. A related entire function, the Riemann xi-function $\xi(s)$, has the same non-trivial zeros as $\zeta(s)$, all of which are conjectured to lie on a single vertical line in the complex plane (the famous Riemann Hypothesis).

How many zeros are there up to a certain height $T$ in the complex plane? This is a fundamental question in number theory. The answer is found by applying Jensen's formula to the $\xi$-function. By estimating the integral of $\ln|\xi(s)|$ on a very large circle using powerful tools like Stirling's approximation for the Gamma function, mathematicians derive the celebrated Riemann-von Mangoldt formula. This formula tells us that the number of zeros up to height $T$ is approximately $\frac{T}{2\pi} \ln\left(\frac{T}{2\pi e}\right)$. A tool from complex analysis allows us to count the zeros of a function that holds the secrets of the primes. It is a spectacular demonstration of the unity of mathematics.

Finally, the formula serves as a powerful computational tool in its own right, turning difficult real integrals into straightforward calculations involving the locations of zeros. It also reveals deep structural constraints, showing, for example, that the presence of a multiple zero near the origin forces a polynomial's average value on the unit circle to be larger than it otherwise would be, a result that emerges from combining Jensen's formula with its namesake, Jensen's inequality.

From proving a 250-year-old theorem to designing modern electronics and probing the mysteries of prime numbers, Jensen's formula is a testament to the power of a single, elegant idea. It teaches us that to understand the whole, we must often look at the parts—and in the world of functions, the zeros are the parts that matter most.