
Eigenvalues are the hidden numbers that define the fundamental properties of many systems in science and engineering, representing everything from the resonant frequencies of a bridge to the energy levels of an atom. A natural and fundamental question arises when we combine two systems: if we add two matrices, $A$ and $B$, what are the eigenvalues of their sum, $A + B$? The most intuitive answer—that the new eigenvalues are simply the sums of the old ones—is, fascinatingly, almost always incorrect. This failure of simple intuition signals the presence of a deeper, more structured mathematical reality.
This article delves into the elegant rules that govern the eigenvalues of a matrix sum. It addresses the gap between our simple intuition and the complex behavior observed in practice, providing a clear map of this important corner of linear algebra. The journey begins in the "Principles and Mechanisms" section, where we will uncover the one unbreakable law that governs these eigenvalues, explore the special harmonious case where they do add up, and finally, build a "corral" of constraints, like the famous Weyl's inequalities, that fence in the possibilities for the general case. Following this, the "Applications and Interdisciplinary Connections" section will reveal how these theoretical principles are not abstract curiosities but are essential tools for understanding interaction, perturbation, and emergence in fields ranging from quantum mechanics to network science.
Imagine you have two guitar strings, each with its own fundamental frequency, or "note." What happens if you try to somehow "add" these two strings together? What would the new note be? Our first, simple-minded guess might be that the new frequency is just the sum of the old ones. It's an appealingly simple idea. In the world of matrices and their eigenvalues—which, as you know, represent fundamental properties like frequencies, energies, or rates of change—this same simple question arises. If we add two matrices, $A$ and $B$, to get a new matrix $C = A + B$, are the eigenvalues of $C$ simply the sums of the eigenvalues of $A$ and $B$?
Nature, it turns out, is a bit more subtle and interesting than that. Let's get our hands dirty and test this simple-minded hypothesis. It's always a good idea in physics and mathematics to test a grand claim with a simple example.
Consider two very ordinary-looking matrices. Let's say matrix $A$ has eigenvalues $\{1, 2\}$ and matrix $B$ has eigenvalues $\{1, -1\}$. If our hypothesis were true, the sum $A + B$ should have eigenvalues $\{2, 1\}$, or maybe $\{0, 3\}$, depending on how we pair them. But if we actually perform the calculation for specific matrices, such as $A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, we find something completely different. For this particular choice, the eigenvalues of the sum turn out to be $(3 \pm \sqrt{5})/2$, which is approximately $2.618$ and $0.382$. This isn't just a near miss; it's a completely different answer!
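If you want to check this yourself, here is a minimal sketch in Python (NumPy assumed), using the particular matrices above:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # eigenvalues {1, 2}
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # eigenvalues {1, -1}

# Eigenvalues of the sum are NOT sums of the eigenvalues.
print(np.linalg.eigvalsh(A + B))   # [0.382 2.618], i.e. (3 ∓ √5)/2
```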
So, our initial beautiful, simple idea is wrong. It's shattered. This is a classic moment in science! When a simple intuition fails, it means there's something deeper and more wonderful going on. The question is no longer "Do they add up?" but "If not, what are the rules?" What governs the eigenvalues of a sum?
All is not lost. While the individual eigenvalues don't behave so simply, there is one property that follows a wonderfully straightforward law. This property is the trace of a matrix—the sum of the elements on its main diagonal. For any two square matrices $A$ and $B$, it is a simple fact that $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$.
What does this have to do with eigenvalues? Well, one of the magical properties of the trace is that it is also equal to the sum of all the eigenvalues of the matrix. For any $n \times n$ matrix $M$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, we have $\operatorname{tr}(M) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.
Putting these two facts together gives us our first, inviolable law for the eigenvalues of a sum: The sum of the "new" eigenvalues is precisely the sum of all the "old" ones. In our previous example, the sum of the eigenvalues of $A$ was $1 + 2 = 3$, and for $B$ it was $1 + (-1) = 0$. Their total sum is $3$. For the sum matrix $A + B$, the eigenvalues were $(3+\sqrt{5})/2$ and $(3-\sqrt{5})/2$, and their sum is indeed $3$. The law holds! This is our anchor, a solid piece of ground in a shifting landscape. It tells us that while individual eigenvalues can change in complex ways, they are bound by this conservation-like law of their sum.
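The law is easy to test far beyond our little two-by-two example. A minimal sketch with random symmetric matrices (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # random symmetric
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

# Individual eigenvalues of A+B differ from sums of eigenvalues,
# but their total agrees with tr(A) + tr(B) to machine precision.
print(np.trace(A) + np.trace(B))
print(np.linalg.eigvalsh(A + B).sum())
```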
The reason our initial guess failed is that matrices have "directionality"—their action depends on the direction of the vector they are multiplying. The eigenvalues are intimately tied to a special set of directions for each matrix, its eigenvectors. When we add two matrices, $A$ and $B$, their special directions are generally not aligned. It’s like trying to combine two different musical scales that don't share the same root note; the result is complex, not a simple superposition.
But what if they do align? What if the matrices share the same set of special directions? In the language of linear algebra, this happens when the matrices commute, meaning $AB = BA$. For symmetric or Hermitian matrices (the kind that show up most often in physics), this condition means they can be diagonalized simultaneously—we can find a single coordinate system where both matrices are just stretches along the axes.
In this special, harmonious case, our initial intuition is gloriously restored! If we line up the matrices in their shared eigenbasis, adding them is just a matter of adding their diagonal entries. The eigenvalues of the sum are indeed the sums of the corresponding eigenvalues of $A$ and $B$. This teaches us a crucial lesson: the complexity arises from the misalignment of the eigenvectors. The problem of the eigenvalues of a sum is fundamentally a problem about geometric orientation.
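We can stage this harmony directly: manufacture two matrices sharing a single random eigenbasis `Q` (the construction is illustrative) and watch the eigenvalues add.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # shared orthonormal eigenbasis
a = np.array([4.0, 2.0, 1.0, -1.0])                # eigenvalues of A
b = np.array([3.0, -2.0, 5.0, 0.5])                # eigenvalues of B

A = Q @ np.diag(a) @ Q.T
B = Q @ np.diag(b) @ Q.T

print(np.allclose(A @ B, B @ A))   # True: they commute
# Eigenvalues of the sum are exactly the paired sums a_i + b_i.
print(np.allclose(np.sort(np.linalg.eigvalsh(A + B)), np.sort(a + b)))   # True
```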
So, we have one exact law (the trace) and one special case (commuting matrices). What about the vast, general case of non-commuting matrices? If we can't have an exact formula for each eigenvalue, can we at least put a fence around the possibilities? Can we establish upper and lower bounds?
Yes, we can! Let's think about the largest eigenvalue, which we'll call $\lambda_{\max}$. We can think of it as the maximum "stretch" a matrix can apply to any vector. Using a tool called the Rayleigh quotient, which formalizes this idea of stretch, we can reason as follows: the stretch of the sum matrix applied to some unit vector $x$, namely $x^*(A+B)x$, is just the sum of the individual stretches, $x^*Ax$ and $x^*Bx$. The stretch from $A$ is, at most, $\lambda_{\max}(A)$, and the stretch from $B$ is at most $\lambda_{\max}(B)$. It's natural, then, to conclude that the maximum possible stretch from their sum can't be more than the sum of their individual maximums.
This intuition proves correct. For any two Hermitian matrices $A$ and $B$, we have a beautiful inequality that looks a lot like the triangle inequality for vectors:

$$\lambda_{\max}(A+B) \le \lambda_{\max}(A) + \lambda_{\max}(B).$$

And similarly, for the smallest eigenvalue, the minimum "stretch" (which can be a compression or negative stretch) is also bounded from below:

$$\lambda_{\min}(A+B) \ge \lambda_{\min}(A) + \lambda_{\min}(B).$$

These inequalities act like the first two posts of a corral. They tell us that the spectrum of the sum matrix can't just wander off to infinity; it's constrained by the spectra of the original matrices.
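Here is a quick randomized stress test of those two fence posts (a sketch; the small tolerance guards against floating-point fuzz):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    A = rng.standard_normal((6, 6)); A = (A + A.T) / 2
    B = rng.standard_normal((6, 6)); B = (B + B.T) / 2
    s, ea, eb = (np.linalg.eigvalsh(M) for M in (A + B, A, B))
    assert s[-1] <= ea[-1] + eb[-1] + 1e-10   # λ_max(A+B) ≤ λ_max(A) + λ_max(B)
    assert s[0]  >= ea[0]  + eb[0]  - 1e-10   # λ_min(A+B) ≥ λ_min(A) + λ_min(B)
print("both bounds held in every trial")
```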
The story gets even better. These bounds on the largest and smallest eigenvalues are just the opening notes of a much grander symphony of inequalities discovered by the great mathematician Hermann Weyl.
Weyl's inequalities provide a complete set of constraints that connect all the eigenvalues. Let's sort the eigenvalues of our matrices in descending order, from largest to smallest: $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. Weyl's results tell us that the $k$-th eigenvalue of the sum, $\lambda_k(A+B)$, is squeezed between two values: whenever the indices satisfy $i + j = k + 1$, we get the upper bound $\lambda_k(A+B) \le \lambda_i(A) + \lambda_j(B)$, and whenever $i + j = k + n$, we get the lower bound $\lambda_k(A+B) \ge \lambda_i(A) + \lambda_j(B)$, pulling from the opposite end of $B$'s spectrum.
A particularly lovely and intuitive case arises when we add a positive semidefinite matrix $B$—a matrix whose eigenvalues are all non-negative ($\lambda_i(B) \ge 0$ for all $i$). Think of this as adding a "purely positive" perturbation. In this case, Weyl's inequalities simplify beautifully to tell us that every single eigenvalue of the sum is greater than or equal to the corresponding eigenvalue of the original matrix:

$$\lambda_i(A+B) \ge \lambda_i(A) \quad \text{for every } i.$$

Adding a "positive" matrix pushes all the eigenvalues up. It’s like strengthening all the springs in a vibrating system; every mode of vibration increases in frequency.
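If you want to watch the springs stiffen numerically, here is a minimal sketch: build a positive semidefinite $B$ as $MM^T$ and compare the sorted spectra entry by entry.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
M = rng.standard_normal((5, 3))
B = M @ M.T                        # positive semidefinite by construction

before = np.linalg.eigvalsh(A)     # ascending order
after  = np.linalg.eigvalsh(A + B)
print(np.all(after >= before - 1e-12))   # True: every eigenvalue moved up (or stayed)
```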
For decades, these inequalities provided a set of necessary conditions. But were they the whole story? If you gave me three lists of numbers—spectra for $A$, $B$, and a potential $A+B$—that satisfied the trace identity and all of these Weyl-type inequalities (and their generalizations), could I always find matrices $A$ and $B$ with the given spectra such that their sum had the target spectrum? This profound question was known as Horn's problem. After decades of work by many mathematicians, the answer was proven to be a resounding yes.
This means that Weyl's inequalities (and their generalizations by Alfred Horn) are not just loose bounds; they are the definitive, tight description of what is possible. They carve out a precise geometric shape (a convex polytope) in the space of eigenvalues, and any point inside that shape is an attainable reality. For instance, if we know the eigenvalues of two $3 \times 3$ Hermitian matrices are, say, $\{3, 2, 1\}$ and $\{3, 2, 1\}$, these inequalities can tell us precisely that the middle eigenvalue of their sum, $\lambda_2(A+B)$, must lie in the interval $[3, 5]$, and moreover, any value in that interval is achievable by some choice of matrices.
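Here is a sketch that walks that corral: fix both spectra at $\{3, 2, 1\}$, spin $B$'s eigenbasis at random, and watch where the middle eigenvalue of the sum lands; the two diagonal alignments at the end touch the fences themselves.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.diag([3.0, 2.0, 1.0])
mids = []
for _ in range(2000):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orientation for B
    B = Q @ np.diag([3.0, 2.0, 1.0]) @ Q.T
    mids.append(np.sort(np.linalg.eigvalsh(A + B))[1])
print(min(mids), max(mids))        # always inside [3, 5]

# The endpoints are attainable with the right alignments:
print(np.sort(np.linalg.eigvalsh(A + np.diag([3.0, 1.0, 2.0])))[1])   # 3.0
print(np.sort(np.linalg.eigvalsh(A + np.diag([2.0, 3.0, 1.0])))[1])   # 5.0
```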
The fact that these bounds are tight is incredibly powerful. It means the boundaries of the "polytope of possibilities" are not just theoretical fences, but destinations we can actually reach. How? By carefully aligning the eigenvectors of $A$ and $B$.
Let's think about this in concrete terms, using an example from quantum mechanics. The eigenvalues of a Hamiltonian matrix represent the possible energy levels of a system. Suppose a total Hamiltonian is a sum of two parts, $H = H_1 + H_2$, where we know the energy spectra of $H_1$ and $H_2$ but are free to choose their "orientation" (i.e., their eigenbases). What is the maximum possible ground state energy (the lowest eigenvalue, $\lambda_{\min}(H)$) of the combined system?
Our lower bound formula says $\lambda_{\min}(H) \ge \lambda_{\min}(H_1) + \lambda_{\min}(H_2)$. This minimum-of-all-minimums is achieved when the eigenvectors for the smallest eigenvalues of $H_1$ and $H_2$ are aligned. But to maximize the lowest energy, we must do something clever. It turns out, we have to employ a strategy of "pairing opposites." To push the lowest possible sum up as high as we can, we must align the eigenvector for the smallest eigenvalue of $H_1$ with the eigenvector for the largest eigenvalue of $H_2$. We align the second-smallest of $H_1$ with the second-largest of $H_2$, and so on, in a perfect anti-alignment. This is a consequence of a deep result known as the rearrangement inequality.
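A sketch of the "pairing opposites" strategy, comparing aligned and anti-aligned orientations of the same two spectra (the numbers are arbitrary):

```python
import numpy as np

e1 = np.array([1.0, 2.0, 5.0])     # spectrum of H1, ascending
e2 = np.array([0.5, 3.0, 4.0])     # spectrum of H2, ascending

H1 = np.diag(e1)
aligned      = H1 + np.diag(e2)          # smallest paired with smallest
anti_aligned = H1 + np.diag(e2[::-1])    # smallest paired with largest

print(np.linalg.eigvalsh(aligned)[0])        # 1.5 : the floor λ_min(H1) + λ_min(H2)
print(np.linalg.eigvalsh(anti_aligned)[0])   # 5.0 : the highest achievable ground state
```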
By strategically arranging the component parts, we can "steer" the resulting eigenvalues to any point within the allowed region defined by a family of theorems from Lidskii, Wielandt, and Horn. We can, for example, aim to maximize the second smallest eigenvalue or any other eigenvalue by choosing the right alignment.
And so, we've come full circle. We started with a simple, flawed guess. Its failure led us down a rabbit hole, where we found a single conservation law, a special case of harmony, and then a symphony of inequalities that caged the possibilities. Finally, we learned that these cages are not prisons; they are playgrounds, and by understanding the rules of alignment, we become the masters, able to construct systems that touch the very edges of what is possible. The eigenvalues of a sum are not a simple sum of eigenvalues, but a rich, structured interplay of geometry, constraints, and possibility.
After our deep dive into the mechanics of eigenvalues and eigenvectors, it's natural to ask: where does this all lead? What good is it? The answer, as is so often the case in physics and mathematics, is that these ideas are not merely abstract curiosities. They are the language used to describe a vast range of phenomena, from the stability of bridges to the energy levels of atoms. In this chapter, we’ll explore how the seemingly narrow question—"What are the eigenvalues of a sum of two matrices?"—unlocks profound insights across science and engineering.
You might be tempted to start with a wonderfully simple guess. If we have a system described by a matrix $A$ and we add a contribution described by a matrix $B$, perhaps the characteristic values of the combined system are just the sums of the characteristic values of $A$ and $B$? It feels right. It's clean, it's simple. And it is almost always wrong.
This is not a failure of our intuition, but rather our first clue that something more interesting is afoot. Nature is rarely so simple as to just add things up. When two systems are combined, they interact, they interfere, they create new collective behaviors that are not just the sum of their parts. The mathematics of matrix sums reflects this physical reality. Consider two of the most fundamental matrices in quantum mechanics, the Pauli matrices $\sigma_x$ and $\sigma_z$, which describe the spin of a particle like an electron along the x and z axes. Both matrices have eigenvalues of $+1$ and $-1$. If we add them, does our new matrix $\sigma_x + \sigma_z$ have eigenvalues like $2$, $0$, or $-2$? Not at all. Its eigenvalues are $\sqrt{2}$ and $-\sqrt{2}$. The act of "summing" the matrices, representing the consideration of a spin oriented between the two axes, has created entirely new characteristic values. More generally, it is easy to construct matrices $A$ and $B$ whose eigenvalues are all zero, yet their sum has non-zero eigenvalues.
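Both counterexamples in this paragraph fit in a few lines (a sketch, NumPy assumed):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=float)    # eigenvalues ±1
sigma_z = np.array([[1, 0], [0, -1]], dtype=float)   # eigenvalues ±1
print(np.linalg.eigvalsh(sigma_x + sigma_z))         # [-1.414  1.414] = ±√2

# Two nilpotent matrices: every eigenvalue is zero...
A = np.array([[0, 1], [0, 0]], dtype=float)
B = np.array([[0, 0], [1, 0]], dtype=float)
print(np.linalg.eigvals(A + B))                      # ...yet the sum has eigenvalues ±1
```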
This non-additive nature is not a mathematical bug; it's a feature. It is the signature of non-trivial interaction. So, our journey begins here: if simple addition fails, what rules govern the symphony of sums?
There is, in fact, a special condition under which our simple intuition is gloriously correct. If two matrices $A$ and $B$ commute—that is, if $AB = BA$—then a wonderful simplification occurs. Intuitively, commuting operators correspond to processes or measurements that don't interfere with each other; the order in which you apply them doesn't matter. For such matrices, there exists a common set of eigenvectors. Think of these eigenvectors as special, "privileged" directions in space. Along these directions, both matrices $A$ and $B$ simply act like scalars, stretching or shrinking vectors without changing their direction. When you add $A$ and $B$, you are simply adding their respective scaling factors along these shared directions. The result is that the eigenvalues of $A + B$ are indeed the sums of the corresponding eigenvalues of $A$ and $B$. This is the ideal, cooperative case, a world without interference.
This idea of simple composition appears in other, more sophisticated forms as well. In network science, for example, we often want to understand the properties of a large, complex network by seeing it as a combination of smaller, simpler graphs. A powerful way to do this is with the Kronecker sum. Imagine you have a simple path graph (a line of nodes) and a small complete graph (every node connected to every other). The Kronecker sum of their corresponding Laplacian matrices describes the Laplacian of a new, larger graph that looks like a "product" of the original two. Miraculously, the eigenvalues of this new, complex graph are simply all possible sums of an eigenvalue from the first graph and an eigenvalue from the second. So, while standard matrix addition is tricky, other forms of composition can yield this beautiful, predictable simplicity. The lesson is that the rules of combination are just as important as the things being combined.
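Here is a minimal sketch of that predictable composition, pairing a 3-node path with a triangle; the helper `laplacian` is ad hoc:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A from an adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # path graph
K3 = np.ones((3, 3)) - np.eye(3)                                # complete graph

L1, L2 = laplacian(P3), laplacian(K3)
# Kronecker sum: L1 ⊕ L2 = L1 ⊗ I + I ⊗ L2, the Laplacian of the product graph
L = np.kron(L1, np.eye(3)) + np.kron(np.eye(3), L2)

mu, nu = np.linalg.eigvalsh(L1), np.linalg.eigvalsh(L2)
pairwise_sums = np.sort((mu[:, None] + nu[None, :]).ravel())
print(np.allclose(np.sort(np.linalg.eigvalsh(L)), pairwise_sums))   # True
```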
In the real world, perfect commutation is rare. More often, we are interested in what happens when we make a small change to a system. We have a system described by $A$—say, a perfectly periodic crystal lattice—and we add a small perturbation $B$, perhaps a single impurity atom or a slight deformation. We don't expect the fundamental nature of the system to change, but its characteristic values (like its electronic energy levels or vibrational frequencies) will shift slightly. How much?
This is the domain of perturbation theory. For small perturbations, we can derive a beautiful and powerful approximation. The change in the $i$-th eigenvalue is, to first order, given by the expression $\delta\lambda_i \approx \dfrac{y_i^* B\, x_i}{y_i^* x_i}$. Let's unpack that. The term $y_i^* B\, x_i$ represents the "response" of the original system's $i$-th mode (described by its left and right eigenvectors, $y_i$ and $x_i$) to the perturbation $B$. It tells us that the eigenvalue shift depends not just on the perturbation itself, but on how that perturbation "aligns" with the natural modes of the original system. Some modes might be very sensitive to a particular change, while others are barely affected. This formula is the bedrock of countless models in physics and engineering, allowing us to calculate how the energy levels of an atom shift in an electric field, or how the resonant frequencies of a mechanical structure change when a small mass is added.
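For a symmetric matrix the left and right eigenvectors coincide (and are orthonormal), so the formula reduces to $\delta\lambda_i \approx x_i^T B\, x_i$. A sketch comparing that first-order prediction to the exact shifts (the test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.diag([1.0, 4.0, 9.0, 16.0])                  # well-separated eigenvalues
E = rng.standard_normal((4, 4)); E = (E + E.T) / 2
B = 1e-3 * E                                        # small symmetric perturbation

evals, vecs = np.linalg.eigh(A)
predicted = evals + np.array([vecs[:, i] @ B @ vecs[:, i] for i in range(4)])
exact     = np.linalg.eigvalsh(A + B)
print(np.abs(predicted - exact).max())              # tiny: second order in |B|
```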
What if the change isn't small? What if we combine two systems of comparable strength? Our approximation breaks down. We can no longer predict the exact eigenvalues. But are we lost? No! We can still set hard limits on what's possible. This is the gift of the Weyl inequalities.
For Hermitian matrices, which are ubiquitous in quantum mechanics and many other areas of physics, Weyl's inequalities provide a rigorous set of upper and lower bounds on the eigenvalues of the sum. For instance, the largest eigenvalue of the sum, $\lambda_{\max}(A+B)$, can be no larger than the sum of the largest eigenvalues of the parts, $\lambda_{\max}(A) + \lambda_{\max}(B)$. Similar inequalities bound all the other eigenvalues, effectively "sandwiching" them in predictable intervals.
This is incredibly powerful. Imagine you know the possible energy levels of two separate quantum systems. If you bring them together, you might not be able to calculate the exact energy levels of the combined system easily, but Weyl's inequalities tell you the allowed range for these new energies. It gives you a non-negotiable budget for reality.
Even more cleverly, we can turn this idea on its head. Suppose we have a system $A$, it interacts with some unknown process $B$, and we measure the final system $C = A + B$. If we know the eigenvalues of $A$ and $C$, we can use the Weyl inequalities to deduce guaranteed bounds on the eigenvalues of the unknown interaction $B = C - A$. This is like being a detective for physical systems. By observing the "before" and "after," we can characterize the "what happened in between."
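A sketch of the detective's logic: since $B = C + (-A)$, Weyl's bounds give $\lambda_{\min}(C) - \lambda_{\max}(A) \le \lambda_i(B) \le \lambda_{\max}(C) - \lambda_{\min}(A)$ for every $i$.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
B_true = rng.standard_normal((5, 5)); B_true = (B_true + B_true.T) / 2
C = A + B_true                       # we only observe A ("before") and C ("after")

ea, ec = np.linalg.eigvalsh(A), np.linalg.eigvalsh(C)
lo = ec[0]  - ea[-1]                 # λ_min(B) ≥ λ_min(C) − λ_max(A)
hi = ec[-1] - ea[0]                  # λ_max(B) ≤ λ_max(C) − λ_min(A)

eb = np.linalg.eigvalsh(B_true)
print(lo <= eb[0] and eb[-1] <= hi)  # True: the unknown spectrum fits the deduced budget
```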
Armed with these concepts, we can see the footprint of eigenvalue sums everywhere.
In Quantum Chemistry, simple models of molecules like the Hückel method represent the molecule as a Hamiltonian matrix. The diagonal elements are the intrinsic energies of electrons at each atomic site, and the off-diagonal elements represent the energy of electrons hopping between bonded atoms. The total electronic energy, which determines the molecule's stability, depends on the sum of the eigenvalues (orbital energies) of this matrix. Understanding how the eigenvalues arise from this sum of site energies and bonding interactions is the very heart of understanding chemical bonds.
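As a minimal sketch of the Hückel picture (with the illustrative choice of site energy $\alpha = 0$ and hopping $\beta = -1$ in arbitrary units), here is the π-system of butadiene, a four-carbon chain:

```python
import numpy as np

alpha, beta = 0.0, -1.0                      # illustrative Hückel parameters
# Hamiltonian for butadiene: site energies on the diagonal, hopping on the bonds
H = alpha * np.eye(4) + beta * np.array([[0, 1, 0, 0],
                                         [1, 0, 1, 0],
                                         [0, 1, 0, 1],
                                         [0, 0, 1, 0]], dtype=float)

orbitals = np.linalg.eigvalsh(H)             # α + β·(±1.618, ±0.618)
print(orbitals)
# Ground-state π energy: two electrons in each of the two lowest orbitals.
print(2 * orbitals[:2].sum())                # ≈ -4.472 (in units of |β|)
```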
In Signal Processing, Hadamard matrices are fundamental tools for encoding information in a way that is robust to noise. If we have a signal represented by a Hadamard matrix $H$ and it gets corrupted by a specific type of structured noise represented by a matrix $N$, the resulting signal is $H + N$. Remarkably, for certain structures, like a rank-1 perturbation, we can go beyond mere bounds and find the exact new eigenvalues. This allows engineers to perfectly characterize the effect of certain types of noise and, potentially, to reverse it.
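One exactly solvable illustration (the scenario is hypothetical, not a specific coding standard): if the rank-1 noise points along an eigenvector $v$ of $H$ with eigenvalue $\mu$, then $H + c\,vv^T$ leaves every other eigenvalue untouched and shifts $\mu$ to exactly $\mu + c$.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]], dtype=float)
H = np.kron(H2, H2)                  # symmetric 4×4 Hadamard matrix, eigenvalues ±2

evals, vecs = np.linalg.eigh(H)
v, mu, c = vecs[:, 0], evals[0], 0.7
N = c * np.outer(v, v)               # rank-1 "structured noise" along an eigenvector

print(np.linalg.eigvalsh(H + N))     # exactly {mu + c} together with the untouched ±2
```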
In the most fundamental physics, the theory of identical particles (like electrons or photons) is governed by the symmetries of permutation, described by the symmetric group $S_n$. Incredibly, key operators in this theory, the Jucys-Murphy elements, are defined as sums of simpler permutation operators (transpositions). The eigenvalues of these summed operators act as unique labels—like quantum numbers—for the states of multi-particle systems, distinguishing how they behave under particle exchange. This shows the concept of an operator sum penetrating into the deepest and most abstract descriptions of our physical world.
From the simple counterexample of Pauli matrices to the profound bounds of Weyl, the study of the eigenvalues of a matrix sum is far more than a mathematical exercise. It is a story about interaction, interference, and emergence. It teaches us that to understand a composite system, we must understand not just its parts, but the rich and subtle rules of their composition.