
Left and Right Eigenvectors

Key Takeaways
  • For non-symmetric matrices, eigenvectors split into two distinct families: right eigenvectors (intrinsic modes) and left eigenvectors (modes of observation), which are the right eigenvectors of the transposed matrix.
  • Left and right eigenvectors exhibit biorthogonality, meaning a left eigenvector is orthogonal to every right eigenvector except its corresponding partner.
  • This duality has practical applications: the right eigenvector often describes a system's structure or shape, while the left eigenvector quantifies the value or sensitivity of its components.
  • The angle between corresponding left and right eigenvectors determines an eigenvalue's sensitivity, serving as a critical indicator of a system's stability and robustness.

Introduction

We often begin our study of physics and mathematics in an idealized world of symmetric systems, where eigenvectors form a simple, orthogonal framework. However, most real-world systems—from ecological populations to engineered structures—are non-symmetric, losing this convenient property. This departure from symmetry introduces a fascinating duality: the splitting of eigenvectors into two distinct but related families, the left and the right. This article demystifies this crucial concept, which is essential for understanding the behavior of complex, realistic systems.

This exploration is divided into two parts. First, under "Principles and Mechanisms," we will delve into the fundamental definitions of left and right eigenvectors, uncover their elegant and powerful relationship of biorthogonality, and see how they provide a natural system for measurement and stability analysis. Then, in "Applications and Interdisciplinary Connections," we will journey through diverse fields—from ecology and control theory to quantum mechanics—to witness how this single mathematical idea provides a profound, unifying framework for describing the structure, value, and stability of the complex systems that surround us.

Principles and Mechanisms

In our journey through physics, we often start in a beautiful, idealized world. We study things that are perfectly balanced, reversible, and symmetrical. Think of a perfectly elastic collision, or the pure tones of a perfectly uniform guitar string. The mathematics describing these systems is equally elegant. The matrices or operators are typically **symmetric** (or **Hermitian** in quantum mechanics), which means they are identical to their own transpose (or conjugate transpose). Their eigenvectors—the special directions in which the system's behavior simplifies to mere stretching—form a perfectly perpendicular, or **orthogonal**, set of axes. This is a wonderfully convenient state of affairs, as these orthogonal eigenvectors provide a natural, stable "grid" upon which we can analyze any state of the system. For such a symmetric matrix, if we have a right eigenvector $x_i$ (the kind we usually learn about), we can always choose its corresponding left eigenvector $y_i$ to be the same vector. Normalizing it to unit length, $\|x_i\|_2 = 1$, automatically gives us the tidy relation $y_i^T x_i = x_i^T x_i = 1$.

But the real world is rarely so pristine. Most systems are "open": they interact with their environment, they lose energy, they have feedback loops, and they are often far from equilibrium. A spinning planet is subject to gyroscopic forces. The dynamics of a population are not reversible. A chemical reaction proceeds in one direction. In these cases, the matrices that govern the system's evolution are ​​non-symmetric​​. And when we step into this more realistic, "crooked" world, something fascinating happens to our familiar concept of eigenvectors. They split.

The Two Families: Right and Left Eigenvectors

For a non-symmetric matrix $A$, the single family of eigenvectors cleaves into two distinct but related families: the right eigenvectors and the left eigenvectors.

The **right eigenvectors**, which we will call $r$, are the ones you already know. They are the vectors that, when acted upon by the matrix $A$, are simply scaled by the eigenvalue $\lambda$. They represent the intrinsic "modes" of behavior of the system.

$$A r = \lambda r$$

The **left eigenvectors**, which we'll call $l$, might seem a bit more mysterious. They are defined by an equation that looks like the mirror image of the first:

$$l^T A = \lambda l^T$$

What does this mean? Instead of the matrix acting on the vector, the vector (transposed into a row) acts on the matrix from the left. A more intuitive way to think about this is to transpose the entire equation. Doing so gives us $A^T l = \lambda l$. This reveals the secret: a left eigenvector of $A$ is simply a right eigenvector of the transposed matrix, $A^T$. They represent a kind of "dual" mode, a way of observing or measuring the system that is specially attuned to its intrinsic behaviors. They carry the same eigenvalue $\lambda$ as their right-sided partners because the determinant of $(A - \lambda I)$ is always the same as the determinant of its transpose, $(A^T - \lambda I)$.
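
Both definitions are easy to verify numerically. Here is a minimal NumPy sketch (the matrix is an arbitrary non-symmetric example, chosen only for illustration): the right eigenvectors come from $A$ itself, the left ones from $A^T$, and the two eigenvalue lists agree.

```python
import numpy as np

# An arbitrary non-symmetric example matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Right eigenvectors: A r = lambda r (columns of R).
eigvals, R = np.linalg.eig(A)

# Left eigenvectors: l^T A = lambda l^T, i.e. right eigenvectors of A^T.
eigvals_left, L = np.linalg.eig(A.T)

# Both problems yield the same eigenvalues (possibly in a different order).
assert np.allclose(np.sort(eigvals.real), np.sort(eigvals_left.real))

# Verify the defining relations for one right/left pair.
r = R[:, 0]
assert np.allclose(A @ r, eigvals[0] * r)

l = L[:, 0]
assert np.allclose(l @ A, eigvals_left[0] * l)  # row-vector form of l^T A = lambda l^T
```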

Biorthogonality: A New Kind of Order

Here is where the story takes a sharp turn. For a non-symmetric matrix, if you take two different right eigenvectors, $r_1$ and $r_2$, they are generally not orthogonal. The beautiful perpendicular grid is gone, shattered into a skewed and seemingly chaotic set of directions. A concrete example shows this immediately: the matrix $L = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}$ has right eigenvectors $r_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $r_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Their dot product is $r_1^T r_2 = 1$, not zero. So, have we lost all sense of order?

No! Nature has just replaced one kind of order with another, more subtle one. While the right eigenvectors may ignore each other, and the left eigenvectors may ignore each other, the two families are intimately connected through a beautiful relationship called **biorthogonality**.

Consider a left eigenvector $l_j$ for eigenvalue $\lambda_j$ and a right eigenvector $r_i$ for eigenvalue $\lambda_i$. Let's look at the quantity $l_j^T A r_i$. We can calculate it in two ways.

  1. Group it as $l_j^T (A r_i)$. Since $A r_i = \lambda_i r_i$, this becomes $l_j^T (\lambda_i r_i) = \lambda_i (l_j^T r_i)$.
  2. Group it as $(l_j^T A) r_i$. Since $l_j^T A = \lambda_j l_j^T$, this becomes $(\lambda_j l_j^T) r_i = \lambda_j (l_j^T r_i)$.

Equating the two results gives us a jewel of an equation: $$(\lambda_i - \lambda_j)\,(l_j^T r_i) = 0$$

This simple result is profound. If the eigenvalues are distinct ($\lambda_i \neq \lambda_j$), the only way for this equation to hold is if the other term is zero:

$$l_j^T r_i = 0 \quad \text{for } i \neq j$$

This is the principle of biorthogonality. Every left eigenvector is orthogonal to every right eigenvector except for its own corresponding partner. Imagine two sets of skewed axes. A vector along one axis in the "right" set is not perpendicular to the other axes in its own set, but it is perfectly perpendicular to the "other" axes in the "left" set. This hidden orthogonality is the key that unlocks the analysis of non-symmetric systems. It's not just a mathematical curiosity; it can be used directly, for instance, to determine unknown components of an eigenvector if you know its partner must be orthogonal to other eigenvectors of the system.
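
We can check biorthogonality directly on the $2\times 2$ example above. A short NumPy sketch: compute both eigenvector families, pair them by eigenvalue, and tabulate every product $l_j^T r_i$; only the diagonal survives.

```python
import numpy as np

# The worked example from the text: eigenvalues 1 and 2.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

eigvals, R = np.linalg.eig(A)         # right eigenvectors (columns)
eigvals_T, Lmat = np.linalg.eig(A.T)  # left eigenvectors (columns)

# Sort both families by eigenvalue so index i pairs l_i with r_i.
R = R[:, np.argsort(eigvals.real)]
Lmat = Lmat[:, np.argsort(eigvals_T.real)]

G = Lmat.T @ R  # G[j, i] = l_j^T r_i

# Off-diagonal products vanish (biorthogonality); diagonal ones do not.
assert np.allclose(G - np.diag(np.diag(G)), 0.0)
assert np.all(np.abs(np.diag(G)) > 1e-8)
print(np.round(G, 10))
```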

A System of Measurement

What about the case when $i = j$? The product $l_i^T r_i$ is, in general, not zero (if it were, the eigenvalue would not be simple). This non-zero number depends on how we've arbitrarily scaled our eigenvectors, since any multiple of an eigenvector is still an eigenvector. This freedom allows us to establish a wonderfully convenient convention. We can always scale the pairs $(l_i, r_i)$ such that their product is exactly one. This gives us the full **biorthonormality condition**:

$$l_i^T r_j = \delta_{ij}$$

where $\delta_{ij}$ is the Kronecker delta (1 if $i = j$, and 0 otherwise). Note that this normalization isn't unique; if we scale $l_i$ by a factor $k$, we can just scale $r_i$ by $1/k$ and the product remains 1.

This normalization is incredibly powerful. Suppose we want to decompose an initial state of a system, $x_0$, into its fundamental modes: $x_0 = c_1 r_1 + c_2 r_2 + \dots + c_n r_n$. In the old world of symmetric matrices, we would find the coefficient $c_i$ by taking the dot product with the eigenvector $r_i$. In our new, non-symmetric world, that won't work because the $r_i$ vectors are not orthogonal to each other. But with our left eigenvectors in hand, the solution is just as elegant. To find the coefficient $c_i$, we simply "project" our state $x_0$ using the corresponding left eigenvector $l_i$:

$$l_i^T x_0 = l_i^T (c_1 r_1 + c_2 r_2 + \dots + c_n r_n)$$

Because of biorthogonality, $l_i^T r_j$ is zero for all terms except where $j = i$. The equation collapses beautifully:

$$l_i^T x_0 = c_i (l_i^T r_i) = c_i \cdot 1 = c_i$$

So, the left eigenvectors provide the perfect set of tools for measuring the components of the right eigenvectors. They are the natural "probes" or "detectors" for the system's fundamental modes. This duality is expressed mathematically through the **completeness relation**: the identity operator $\mathbb{I}$ can be written as the sum of all the mode projectors: $\mathbb{I} = \sum_n |R_n\rangle \langle L_n|$.
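
A convenient computational shortcut makes this concrete (a sketch, assuming distinct eigenvalues): if the columns of $R$ hold the right eigenvectors, then the rows of $R^{-1}$ are automatically left eigenvectors already scaled to biorthonormality, since $R^{-1} R = I$. Mode coefficients then fall out of a single matrix-vector product.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

eigvals, R = np.linalg.eig(A)  # columns of R are right eigenvectors
Linv = np.linalg.inv(R)        # rows of R^{-1} are biorthonormal left eigenvectors

# Biorthonormality l_i^T r_j = delta_ij holds by construction.
assert np.allclose(Linv @ R, np.eye(2))

# Decompose an arbitrary initial state into modes: c_i = l_i^T x0.
x0 = np.array([3.0, -1.0])
c = Linv @ x0

# Reconstruct: x0 = sum_i c_i r_i.
assert np.allclose(R @ c, x0)

# Evolve t steps mode by mode: x_t = sum_i c_i lambda_i^t r_i.
t = 5
x_t = R @ (eigvals**t * c)
assert np.allclose(x_t, np.linalg.matrix_power(A, t) @ x0)
```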

This principle extends far beyond simple matrix systems. In structural dynamics, systems are often described by a generalized eigenproblem like $K r = \lambda M r$. Here, biorthogonality manifests with respect to the mass matrix $M$: the proper relation is $l_i^T M r_j = \delta_{ij}$. The physics of the problem dictates the "inner product" that reveals the underlying order. Likewise, in quantum mechanics, the expectation value of an observable $\hat{O}$ in non-Hermitian systems is not the familiar $\langle \psi | \hat{O} | \psi \rangle$, but a "sandwich" formed by the left and right states: $\langle L | \hat{O} | R \rangle$.

Sensitivity and the Angle of Near-Collapse

There is one final, crucial insight that left and right eigenvectors provide. They tell us how stable a system's modes are. The sensitivity of an eigenvalue $\lambda$ to small perturbations in the matrix $A$ is captured by the **eigenvalue condition number**, $\kappa(\lambda)$:

$$\kappa(\lambda) = \frac{\|r\| \, \|l\|}{|l^T r|}$$

Notice the denominator: it's the product $l^T r$ that we just discussed. If the left and right eigenvectors are nearly parallel, the denominator is comparable to the numerator, so $\kappa(\lambda)$ stays close to 1 and the eigenvalue is robust. But what if $l$ and $r$ are nearly orthogonal? Then $l^T r$ approaches zero, and the condition number $\kappa(\lambda)$ blows up to infinity!

This isn't just a mathematical abstraction. It happens when a matrix is close to becoming **defective**—a point where two or more eigenvalues and their corresponding eigenvectors coalesce into a single, inseparable mode known as a Jordan block. At this critical point, the left and right eigenvectors become exactly orthogonal, $l^T r = 0$.

Consider the matrix $A(\varepsilon) = \begin{pmatrix} 0 & 1 \\ \varepsilon & 0 \end{pmatrix}$. For any $\varepsilon > 0$, it has two distinct eigenvalues $\pm\sqrt{\varepsilon}$ and is perfectly well-behaved. But as $\varepsilon$ gets closer to zero, the left and right eigenvectors swing around until they become almost perpendicular. The condition number for the eigenvalues behaves like $1/(2\sqrt{\varepsilon})$, soaring to infinity as $\varepsilon \to 0$. This means that for a physical system described by such a matrix, even the tiniest amount of noise or perturbation can cause wild swings in its observed behavior. The angle between the left and right eigenvectors is therefore a powerful diagnostic tool, a warning sign that the system is approaching a point of extreme sensitivity and instability.
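
This blow-up is easy to reproduce numerically. The sketch below computes $\kappa(\lambda)$ for $A(\varepsilon)$, exploiting the fact that with the rows of $R^{-1}$ as left eigenvectors each denominator $l^T r$ equals 1 exactly.

```python
import numpy as np

def eig_condition_numbers(A):
    # kappa_i = ||l_i|| ||r_i|| / |l_i^T r_i|. Taking the rows of inv(R)
    # as left eigenvectors makes every denominator exactly 1.
    eigvals, R = np.linalg.eig(A)
    Linv = np.linalg.inv(R)
    kappas = np.linalg.norm(Linv, axis=1) * np.linalg.norm(R, axis=0)
    return eigvals, kappas

for eps in (1e-2, 1e-4, 1e-6):
    A = np.array([[0.0, 1.0],
                  [eps, 0.0]])
    eigvals, kappas = eig_condition_numbers(A)
    # kappa grows like 1/(2 sqrt(eps)) as the matrix nears a Jordan block.
    print(f"eps={eps:.0e}  kappa={kappas[0]:.1f}  "
          f"1/(2*sqrt(eps))={1 / (2 * np.sqrt(eps)):.1f}")
```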

Thus, the distinction between left and right eigenvectors is not a mere complication. It is a doorway to a deeper understanding of the complex, non-symmetric systems that constitute so much of our world, revealing a hidden biorthogonal structure, providing a natural system of measurement, and warning us of the perilous points of instability.

Applications and Interdisciplinary Connections

Now that we have met these two characters, the left and the right eigenvectors, and understood their peculiar relationship of biorthogonality, we might ask: what are they good for? If they were merely a mathematical curiosity, they wouldn't command our attention for so long. But the truth is far more exciting. This duality—this pairing of a "right-hand" vector that gets transformed and a "left-hand" vector that measures transformations—appears in a startling number of places. It provides a unified language to describe the behavior, stability, and control of complex systems all across science and engineering. Let us take a journey through some of these seemingly disconnected fields and see how this one idea ties them all together.

The Shape and Value of the Future: Ecology and Demography

Perhaps the most intuitive and beautiful application is found in ecology, when we try to predict the future of a population. Imagine a species with several life stages: juvenile, young adult, mature adult. We can write down a matrix, let's call it $A$, that tells us how many individuals in each stage next year are produced by the individuals in each stage this year. This is a population projection matrix. If we start with a population vector $n_t$, the population next year is $n_{t+1} = A n_t$.

What happens in the long run? The population settles into a steady pattern of growth, where the proportions of individuals in each stage become constant. This fixed set of proportions is the **stable stage distribution**, and it is nothing other than the dominant right eigenvector, $w$, of the matrix $A$. It answers the question: "What will the population's structure look like in the future?" The right eigenvector describes the ultimate shape or form of the system.

But what about the left eigenvector, $v$? It answers a different, more subtle question: "What is the value of an individual in each stage to the future growth of the population?" A juvenile might not be reproducing now, but it has the potential to survive and reproduce for many years. A very old individual might still be reproducing, but it has little future left. The left eigenvector assigns a number to each stage, called its **reproductive value**, that precisely quantifies this contribution to all future generations. The total reproductive value of the entire population, $v^T n_t$, grows at a clean, predictable rate given by the dominant eigenvalue $\lambda$. The left eigenvector, then, tells us about the intrinsic worth or potential of each part of the system. This elegant duality of "shape" and "value" is our first clue to the power of this mathematical pairing.
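
To make the shape/value duality tangible, here is a toy three-stage projection matrix (the stage structure and all the numbers are invented for illustration, not taken from any real species):

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative numbers only):
# stages = juvenile, young adult, mature adult.
A = np.array([[0.0, 1.0, 5.0],   # per-capita fecundities
              [0.3, 0.0, 0.0],   # juvenile -> young adult survival
              [0.0, 0.5, 0.9]])  # young -> mature survival; mature persistence

eigvals, R = np.linalg.eig(A)
eigvals_T, V = np.linalg.eig(A.T)

# Dominant eigenvalue = asymptotic growth rate lambda.
i = np.argmax(eigvals.real)
lam = eigvals[i].real

# Stable stage distribution: dominant right eigenvector, scaled to sum to 1.
w = np.abs(R[:, i].real)
w /= w.sum()

# Reproductive values: dominant left eigenvector, scaled so v[juvenile] = 1.
j = np.argmax(eigvals_T.real)
v = np.abs(V[:, j].real)
v /= v[0]

print("lambda =", lam)            # long-run growth rate
print("stable structure w =", w)
print("reproductive values v =", v)

# Total reproductive value grows by exactly lambda each year:
# v^T (A n) = lambda (v^T n) for ANY population vector n.
n = np.array([100.0, 20.0, 5.0])
assert np.isclose(v @ (A @ n), lam * (v @ n))
```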

Designing for Control: Engineering Vibrations and Systems

This duality of shape and value finds a powerful echo in engineering, where we want to not only understand systems but also control them. Consider a complex structure like an airplane wing or a bridge. It can vibrate in many different ways, called modes. For simple, idealized systems, these modes are nicely independent. But in the real world, the damping in a structure is often "non-proportional"—imagine a bridge made of steel beams connected by rubber joints. The way energy dissipates is complex and couples the modes together. The system's dynamics are governed by a non-symmetric state-space matrix.

How can we possibly analyze such a mess? Once again, the left and right eigenvectors come to the rescue. By finding the complete set of right eigenvectors (the modal "shapes") and their corresponding left eigenvectors, we can perform a mathematical transformation that completely decouples the complicated equations of motion into a set of simple, independent equations, one for each mode. This technique, called modal analysis, is possible only because of the biorthogonality between the left and right eigenvectors. It is the fundamental tool that allows engineers to understand and predict the vibrations of nearly any complex linear structure.

Knowing the modes is one thing; controlling them is another. Suppose we want to place actuators (like thrusters or shakers) and sensors on our structure. Where should we put them for maximum effect? Control theory provides a stunningly clear answer using our two types of eigenvectors. To best excite or control a particular mode, you should place an actuator at a location where its force projects strongly onto that mode's **left eigenvector**. The left eigenvector tells you where the system is most receptive to being "pushed" for that mode. Conversely, to best measure or observe a mode, you should place a sensor where that mode's **right eigenvector** has a large component. The right eigenvector tells you where the system's "shape" moves the most for that mode. The effectiveness of a control loop—from actuator to sensor—is proportional to the product of these two projections. This gives engineers a precise recipe for designing smart structures, from noise-canceling headphones to satellites that hold a steady orientation.

Fragility and Surprising Gains: The Peculiar World of Non-Normal Systems

So far, our eigenvectors have been well-behaved tools for understanding and design. But for non-symmetric (or, more generally, non-normal) matrices, they hide a strange and counter-intuitive world. In a symmetric system, the left and right eigenvectors are the same. They form a nice, orthogonal set, like the axes of a coordinate system. In a non-normal system, they are different, and the angle between a corresponding left and right eigenvector can be very large—they can become nearly orthogonal to each other.

When this happens, the system becomes "fragile" or "ill-conditioned". An eigenvalue that has nearly orthogonal left and right eigenvectors is incredibly sensitive to small perturbations of the matrix. A tiny change in the system can cause a huge shift in the eigenvalue. This has profound implications for numerical computation. Algorithms that try to find these eigenvalues can become unstable, because small floating-point errors get amplified catastrophically. The stability of our computational world depends on the left and right eigenvectors not getting too close to orthogonal! Physicists and mathematicians even have a name for the measure of this non-orthogonality: the **Petermann factor**, which has a direct physical meaning in laser physics, quantifying the excess noise generated in a laser cavity due to its non-orthogonal modes.

This fragility is one side of a coin. The other side is even stranger: the potential for enormous, but transient, amplification. In a symmetric system, the maximum amplification the matrix can impart on any vector is simply its largest eigenvalue. In a non-normal system, this is not true! Certain input vectors can be amplified by an amount far exceeding the largest eigenvalue. This maximum amplification is given by the largest singular value, not the largest eigenvalue. The input direction that achieves this massive gain is not an eigenvector at all. And the output points in yet another direction! This phenomenon of transient growth is crucial in fields like fluid dynamics, where it can explain the transition to turbulence even when all the eigenmodes are stable and decaying. The asymmetry between left and right eigenvectors opens the door to a world where a system can be stable in the long run, yet exhibit wild excursions in the short term.
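
A standard toy example (not from the text) shows both faces of non-normality at once: a matrix whose eigenvalues lie safely inside the unit circle, yet whose largest singular value is enormous, so some inputs are amplified a hundredfold before the eventual decay wins.

```python
import numpy as np

# Stable but strongly non-normal: eigenvalues 0.9 and 0.5, both inside
# the unit circle, so A^t x decays eventually for every x.
A = np.array([[0.9, 100.0],
              [0.0,   0.5]])

spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))  # 0.9
largest_singular_value = np.linalg.norm(A, 2)           # about 100

# The worst-case input is a singular vector, not an eigenvector.
_, _, Vt = np.linalg.svd(A)
x = Vt[0]  # unit-norm input direction achieving the maximal one-step gain

norms = []
x_t = x.copy()
for _ in range(200):
    x_t = A @ x_t
    norms.append(np.linalg.norm(x_t))

print("spectral radius:", spectral_radius)
print("largest singular value:", largest_singular_value)
print("peak transient norm:", max(norms))  # far above every |eigenvalue|
print("norm after 200 steps:", norms[-1])  # long-run decay wins
```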

Unveiling the Hidden Order: From Molecules to Reactions

The final stop on our tour is at the frontiers of chemistry and quantum physics, where systems are dizzyingly complex. Consider the chemical reactions happening in a flame—a chaotic dance of thousands of chemical species interacting on timescales from femtoseconds to seconds. How can we ever hope to model this? The equations are governed by a massive, non-symmetric Jacobian matrix. The key insight of methods like Intrinsic Low-Dimensional Manifold (ILDM) theory is that most of this action is "fast" and uninteresting; the overall behavior is governed by a few "slow" processes.

To find this hidden simplicity, scientists compute the left and right eigenvectors of the Jacobian. With these, they construct "projectors"—mathematical operators that can take any state of the system and perfectly separate it into its fast-moving components and its slow-moving components. By throwing away the fast parts, they can reduce a model with thousands of variables to one with just a handful, without losing the essential physics. This powerful dimensionality reduction, which makes modern combustion and atmospheric modeling possible, is built entirely on the foundation of biorthogonality.
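
In miniature, the projector construction looks like this (a sketch only: the $2\times 2$ "Jacobian" is a toy stand-in with one slow and one fast eigenvalue, not a real chemical mechanism):

```python
import numpy as np

# Toy fast/slow system: one slow eigenvalue (-1) and one fast one (-100).
J = np.array([[-1.0,    2.0],
              [ 0.0, -100.0]])

eigvals, R = np.linalg.eig(J)
Linv = np.linalg.inv(R)  # rows = biorthonormal left eigenvectors

order = np.argsort(np.abs(eigvals))  # slow modes first
slow = order[:1]                     # keep the single slow mode

# Spectral projector onto the slow subspace: P = sum over slow i of r_i l_i^T.
P_slow = R[:, slow] @ Linv[slow, :]

# P is a genuine projector (P^2 = P) and it commutes with J.
assert np.allclose(P_slow @ P_slow, P_slow)
assert np.allclose(P_slow @ J, J @ P_slow)

# Split any state into slow + fast parts; the fast part decays ~ e^(-100 t).
x = np.array([1.0, 1.0])
x_slow = P_slow @ x
x_fast = x - x_slow
print("slow part:", x_slow, " fast part:", x_fast)
```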

This principle reaches its apex in the quantum world. As we saw in the previous chapter, quantum mechanics is built on Hermitian operators, which have a perfect symmetry of left and right eigenvectors. But what happens when a quantum system—say, a molecule—is not isolated, but is interacting with its environment, perhaps by absorbing or emitting light? Such "open quantum systems" are no longer described by Hermitian Hamiltonians. The new, non-Hermitian Hamiltonian has distinct left and right eigenvectors.

Here, they take on distinct physical jobs. The right eigenvectors are used to construct the quantum state vectors, the kets $|\Psi_k\rangle$. The left eigenvectors are used to construct the dual vectors, the bras $\langle\Psi_k|$. Because the system is non-Hermitian, the bra $\langle\Psi_k|$ is not the conjugate transpose of the ket $|\Psi_k\rangle$. To calculate any measurable quantity—like the average position of an electron or the probability of a transition from one state to another—one must form a "sandwich" $\langle \Psi_i | \hat{O} | \Psi_j \rangle$ using the "bra" from the left eigenvector and the "ket" from the right eigenvector. In the strange, asymmetric world of open quantum systems, you simply cannot do physics without both.

From counting animal populations to guiding spacecraft, from ensuring our computer simulations are stable to predicting the properties of molecules, the subtle and elegant duality of left and right eigenvectors provides a profound, unifying framework. It is a stunning example of how a single mathematical idea can illuminate the fundamental structure and behavior of our world in so many different corners.