Popular Science

Range of a Linear Transformation

SciencePedia
  • The range of a linear transformation is the set of all possible outputs, which always forms a structured subspace within the output space (codomain).
  • For a transformation represented by a matrix, the range is identical to the column space—the span of the matrix's column vectors.
  • The dimension of the range, known as the rank, is linked to the input space's dimension and the kernel's dimension (nullity) by the Rank-Nullity Theorem.
  • Understanding the range is crucial for determining if a transformation is surjective (onto) and, for square matrices, if it is invertible.

Introduction

A linear transformation acts like a machine, taking an input vector from one space and producing an output vector in another. A fundamental question naturally arises: what does the collection of all possible outputs look like? Is it a chaotic assortment of points, or does it possess a predictable and elegant structure? This set of all achievable destinations is known as the range of the transformation, and understanding its properties is key to unlocking the transformation's core identity. This article addresses the knowledge gap between viewing a transformation as a mere calculation and understanding it as a geometric and structural operation.

This article will guide you through the essential concepts surrounding the range of a linear transformation. In the "Principles and Mechanisms" chapter, we will define the range, reveal its profound connection to the column space of a matrix, and explore its geometric nature as a subspace. We will also introduce the Rank-Nullity Theorem, a fundamental principle that governs the dimensions of the spaces involved. Subsequently, the "Applications and Interdisciplinary Connections" chapter will demonstrate how these theoretical ideas manifest in tangible applications, from the physics of rotating objects to the abstract worlds of polynomial and matrix spaces, ultimately connecting the range to a matrix's eigenvalues.

Principles and Mechanisms

Imagine you have a machine, a kind of magical function box. You feed it an object—let's say a vector—and it spits out a new one. A linear transformation is just such a machine, but one that obeys a couple of very simple, very strict rules. If you've just been introduced to these transformations, you might be wondering: If I can put any vector from my input space into this machine, what does the collection of all possible output vectors look like? Is it a random cloud of points? Does it fill the entire output space? Or does it have some specific, elegant structure? This collection of all possible destinations is what we call the range of the transformation, and understanding it is like understanding the very soul of the machine.

What is the Range? The Set of All Possible Destinations

Let's think about a linear transformation $T$ that takes vectors from some input space (the domain) and maps them to an output space (the codomain). The range of $T$, sometimes called its image, is simply the set of all vectors $T(\mathbf{x})$ you can get by feeding every possible input vector $\mathbf{x}$ into the machine.

This isn't just an abstract idea. Consider a computer graphics pipeline that maps 2D coordinates $(x, y)$ from a flat texture file into 3D positions in a virtual world. A simple linear transformation for this might be $T(x, y) = (x - y,\ x + y,\ 2x)$. If you take every single point from your 2D texture, where do they all land in the 3D world? Do they fill up all of 3D space? The answer, perhaps surprisingly, is no. As it turns out, all these output points lie on a perfectly flat plane that passes through the origin. This plane is the range of the transformation.
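We can check this claim numerically. The sketch below (Python with NumPy) feeds a batch of random 2D inputs through $T$ and verifies that every output lands on one plane; the specific plane equation $X + Y - Z = 0$ is worked out here from the formula for $T$, not stated in the text above.

```python
import numpy as np

# T(x, y) = (x - y, x + y, 2x), written as a 3x2 matrix acting on column vectors
A = np.array([[1.0, -1.0],
              [1.0,  1.0],
              [2.0,  0.0]])

# Map a batch of random 2D inputs into 3D
rng = np.random.default_rng(0)
inputs = rng.standard_normal((1000, 2))
outputs = inputs @ A.T          # each row is T(x, y)

# Every output (X, Y, Z) satisfies X + Y - Z = 0,
# because (x - y) + (x + y) - 2x = 0 for any input (x, y)
residual = outputs[:, 0] + outputs[:, 1] - outputs[:, 2]
print(np.max(np.abs(residual)))   # ~0, up to floating-point error
```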

This tells us something profound right away: the range isn't just a jumble of points. It has structure. In fact, for any linear transformation, the range is always a special kind of set called a subspace. This means it's a "flat" entity (like a point, a line, or a plane) that cuts right through the origin of the output space. Even if the transformation is more exotic, say, mapping polynomials to other polynomials, the range still retains this structured nature. For instance, a transformation on polynomials might only ever produce outputs of the form $a + bt + 2bt^2$, meaning the coefficient of the $t^2$ term is always exactly twice the coefficient of the $t$ term. Any polynomial not satisfying this rule is simply not a possible destination; it is outside the range.

The Column Space: Unveiling the Machine's Blueprint

So, the range is a subspace. That's a nice geometric insight, but how do we find it? If our transformation is represented by a matrix $A$, such that $T(\mathbf{x}) = A\mathbf{x}$, do we have to plug in infinitely many $\mathbf{x}$'s to trace out the range? Fortunately, no. The secret is locked inside the matrix itself.

Here is one of the most fundamental ideas in all of linear algebra: the matrix-vector product $A\mathbf{x}$ is, by its very definition, a linear combination of the columns of the matrix $A$. The components of the vector $\mathbf{x}$ are nothing more than the weights, or ingredients, you use in your recipe to mix the columns.

Let's say the columns of $A$ are $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n$ and your input vector is $\mathbf{x} = (x_1, x_2, \dots, x_n)$. Then the output is:

$$T(\mathbf{x}) = A\mathbf{x} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \dots + x_n \mathbf{a}_n$$

When you look at it this way, the answer to our question becomes wonderfully clear. The set of all possible outputs (the range) is simply the set of all possible linear combinations of the columns of $A$. And what do we call the set of all linear combinations of a set of vectors? We call it their span. So, the range of the transformation $T$ is precisely the span of the columns of $A$, a space we call the column space of $A$.
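The identity is easy to verify numerically. Here is a minimal NumPy check; the particular matrix and input vector are arbitrary values chosen purely for illustration:

```python
import numpy as np

# A small matrix whose columns are the "building blocks"
A = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 0.0]])
x = np.array([2.0, -1.0, 0.5])

# The matrix-vector product...
Ax = A @ x

# ...is exactly the weighted sum x1*a1 + x2*a2 + x3*a3 of the columns
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(Ax)
print(combo)
```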

This is a beautiful and powerful connection. The abstract, operational idea of a range is made tangible and computable by the static, structural idea of a column space. The columns of the matrix are the fundamental building blocks, and the range is the entire structure—be it a line, a plane, or something bigger—that you can build with them. If your "building blocks" (the columns) are all zero vectors, then the only thing you can build is the zero vector itself. The range is just the origin, a subspace of dimension 0. If the columns are linearly dependent, say one is a combination of the others, then it adds nothing new to the span. The dimension of your structure is determined only by the number of linearly independent columns.

The Geometry of the Range: Lines, Planes, and Subspaces

Thinking of the range as the span of the column vectors gives us a powerful geometric intuition. The dimension of the range, which we call the rank of the transformation (or matrix), is simply the number of linearly independent columns.

  • If a transformation $T: \mathbb{R}^2 \to \mathbb{R}^3$ has a matrix whose two columns are linearly independent, the range will be the plane spanned by those two vectors—a 2-dimensional subspace of $\mathbb{R}^3$.

  • What if we chain transformations together? Imagine a machine $T_1$ that first projects every vector in 3D space onto a specific line, and then a second machine $T_2$ that rotates whatever it receives. The range of the first machine, $T_1$, is the line itself. This line then becomes the input for the second machine, $T_2$. When you rotate a line, you get... another line. So the range of the composite transformation $T_2 \circ T_1$ is the rotated line. The range of the composition is the image of the range of the first map under the second map.
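This chained example can be sketched numerically: a projection onto a line has rank 1, and composing it with a rotation leaves the rank at 1. The particular line and rotation angle below are arbitrary choices for illustration.

```python
import numpy as np

# Projection onto the line spanned by the unit vector u: P = u u^T
u = np.array([1.0, 2.0, 2.0]) / 3.0      # |(1, 2, 2)| = 3, so u is a unit vector
P = np.outer(u, u)

# Rotation about the z-axis by 30 degrees
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Range of P is the line through u (rank 1); rotating a line
# gives another line, so the composite R @ P also has rank 1
print(np.linalg.matrix_rank(P))       # 1
print(np.linalg.matrix_rank(R @ P))   # 1
```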

This geometric viewpoint is incredibly powerful. It allows us to reason about the behavior of complex operations by visualizing their effect on simple shapes like lines and planes.

The Rank-Nullity Theorem: A Cosmic Accounting Principle

Now for a truly remarkable idea. Every transformation does two things: it preserves some information and it discards some information. The range represents what is preserved. What about what's discarded? That would be the set of all input vectors that the machine crushes down to the single output vector $\mathbf{0}$. We call this set the kernel or null space of the transformation.

You might sense that there's a trade-off here. If a transformation crushes a large part of its input space (a large kernel), you'd expect its variety of outputs to be smaller (a small range). Conversely, if it crushes almost nothing (a tiny kernel), it should produce a rich variety of outputs (a large range). This intuition is perfectly captured by one of the most elegant results in linear algebra: the Rank-Nullity Theorem.

For a transformation $T$ from a finite-dimensional vector space $V$, the theorem states:

$$\dim(V) = \dim(\text{kernel}(T)) + \dim(\text{range}(T))$$

In simpler terms: the dimension of the input space equals the dimension of what gets crushed (the nullity) plus the dimension of what comes out (the rank). Not a single dimension is lost in the accounting.

Let's see this "cosmic accounting principle" in action. Consider a transformation from $\mathbb{R}^3$ to $\mathbb{R}^3$ that first projects every vector onto the $xy$-plane and then rotates it. Any vector on the $z$-axis, like $(0, 0, z)$, gets projected to the origin and stays there. The entire $z$-axis is crushed to zero, so the kernel is the $z$-axis, a space of dimension 1. The output of the transformation is always a vector in the $xy$-plane, so the range is the $xy$-plane, a space of dimension 2. The input space was $\mathbb{R}^3$, with dimension 3. And lo and behold: $3\ (\text{input dim}) = 1\ (\text{nullity}) + 2\ (\text{rank})$.
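Here is a quick numerical check of this accounting, building the projection-then-rotation map as a matrix; the 45° rotation angle is an arbitrary choice, since any rotation about the $z$-axis gives the same rank.

```python
import numpy as np

# Project onto the xy-plane, then rotate about the z-axis by 45 degrees
P = np.diag([1.0, 1.0, 0.0])
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
A = R @ P

rank = np.linalg.matrix_rank(A)
nullity = 3 - rank          # Rank-Nullity: dim(kernel) = dim(domain) - rank
print(rank, nullity)        # 2 1

# The z-axis is crushed to zero: A @ (0, 0, 1) = 0
print(A @ np.array([0.0, 0.0, 1.0]))
```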

The theorem's predictive power is stunning. Imagine a signal filter that takes 5-dimensional vectors as input. We are told that it completely nullifies any input vector satisfying two specific linear constraints. By analyzing these constraints, we can find that the space of nullified inputs—the kernel—has a dimension of 3. The Rank-Nullity Theorem then tells us, without us needing to know anything else about the filter, that the dimension of its range must be $5 - 3 = 2$. Similarly, if we know a transformation from $\mathbb{R}^4$ to $\mathbb{R}^3$ has a range that is a plane (dimension 2), the theorem immediately forces the kernel to have dimension $4 - 2 = 2$.

Range, Rank, and Reality: When is Everything Reachable?

The concept of the range culminates in answering a very practical question: for a transformation $T: \mathbb{R}^n \to \mathbb{R}^n$, when is it possible to reach every destination in the output space? When is the range the entire space $\mathbb{R}^n$? Such a transformation is called onto or surjective.

For this to happen, the dimension of the range—the rank—must be $n$. According to our cosmic accounting principle, if the rank is $n$, then the nullity must be $n - n = 0$. A nullity of 0 means the kernel contains only the zero vector. This, in turn, means the transformation is one-to-one (or injective).

For square matrices, this is where everything comes together in a grand synthesis known as the Invertible Matrix Theorem. For an $n \times n$ matrix $A$, the following statements are either all true or all false:

  • The range of the transformation is all of $\mathbb{R}^n$ (it is onto).
  • The rank of $A$ is $n$.
  • The columns of $A$ are linearly independent and span $\mathbb{R}^n$.
  • The kernel of the transformation is just $\{\mathbf{0}\}$ (it is one-to-one).
  • The determinant of $A$ is non-zero.
  • The matrix $A$ is invertible.
  • The matrix $A$ can be expressed as a product of elementary matrices.

This theorem is the linchpin of linear algebra. It tells us that a seemingly simple property of the range—whether it fills the whole space—is profoundly linked to all these other crucial properties. If you know that the rank of a $3 \times 3$ matrix is only 2, you know immediately that its range is not all of $\mathbb{R}^3$, its kernel is non-trivial, and its determinant must be zero. This allows you to solve for unknown parameters within the matrix that would force this condition.
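A small numerical illustration of this equivalence: the matrix below is a hypothetical example whose third column is the sum of the first two, and NumPy confirms that rank 2 and zero determinant travel together.

```python
import numpy as np

# A 3x3 matrix with column 3 = column 1 + column 2 (values chosen for illustration),
# so its columns are linearly dependent and its rank is 2
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 4.0, 4.0]])

rank = np.linalg.matrix_rank(A)
det = np.linalg.det(A)
print(rank)   # 2: the range is a plane, not all of R^3
print(det)    # ~0: the matrix is not invertible
```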

Even more strikingly, if you know that the range of a transformation $T: \mathbb{R}^4 \to \mathbb{R}^4$ is constrained—for example, if every output vector must be orthogonal to a certain fixed vector—then you know its range cannot be all of $\mathbb{R}^4$. Its rank must be less than 4. From this single fact about the range, the whole house of cards of invertibility tumbles down: the determinant is zero, and the matrix cannot be written as a product of elementary matrices.

So, the range is far more than a simple list of outputs. It is a geometric object with a deep algebraic structure, a key player in the fundamental budget of dimensions, and a powerful diagnostic tool that reveals the innermost character of a linear transformation. By understanding the range, we truly begin to understand the machine itself.

Applications and Interdisciplinary Connections

In our previous discussion, we sketched out the abstract idea of a linear transformation's range—the set of all possible outcomes, a "landing zone" within the target vector space. You might be tempted to think of this as just a collection of vectors, perhaps a messy splash of points. But that couldn't be further from the truth. The range is a beautiful, structured subspace in its own right. It possesses a specific geometry, a definite dimension, and understanding it unlocks profound insights into everything from the motion of a spinning top to the fundamental nature of matrices themselves.

Let's embark on a journey to see just how powerful and far-reaching this idea of a "space of possibilities" truly is. We will see that by understanding what a transformation can do, we also learn an immense amount about its deepest inner workings.

The Geometry of the Possible

The most intuitive place to start is with the vectors we can see and draw in two- or three-dimensional space. Imagine a transformation that takes vectors from a 2D plane, $\mathbb{R}^2$, and maps them into 3D space, $\mathbb{R}^3$. As we've learned, the range of such a transformation is spanned by the columns of its matrix representation. If the transformation matrix has two linearly independent columns, these two vectors in $\mathbb{R}^3$ define a plane passing through the origin. This plane is the range.

Think about what this means. The transformation takes the entire infinite plane of $\mathbb{R}^2$ and carefully lays it down as a new, tilted plane within the vastness of $\mathbb{R}^3$. Every single vector the transformation can possibly create, no matter what input you feed it, will lie perfectly on this plane. The transformation is fundamentally constrained to live and breathe within this two-dimensional subspace.

We can go even further than just visualizing this plane; we can describe it with a precise mathematical equation. By representing any point in the range as a linear combination of the column vectors, we can find a relationship between its coordinates—say, $x$, $y$, and $z$. This process reveals the implicit equation of the plane, something like $z = 2y - x$. This isn't just a mathematical exercise; it's a way of giving a concrete identity to the world of the transformation's outputs.
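As a sketch, suppose (hypothetically) the two columns are $(1, 0, -1)$ and $(0, 1, 2)$; each satisfies $z = 2y - x$, and so, by linearity, does every point in their span:

```python
import numpy as np

# Hypothetical 3x2 matrix whose columns both satisfy z = 2y - x,
# so the range is exactly that plane
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [-1.0, 2.0]])

rng = np.random.default_rng(1)
for _ in range(100):
    x, y, z = A @ rng.standard_normal(2)   # a random point in the range
    assert abs(z - (2 * y - x)) < 1e-9     # it lies on the plane z = 2y - x
print("all sampled outputs satisfy z = 2y - x")
```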

The Physics of Motion and Constraint

This idea of a constrained output space is not just a geometric curiosity. It is a fundamental principle in the physical world. Consider the physics of a rigid body, like a spinning top or a planet, rotating with a constant angular velocity $\boldsymbol{\omega}$ around an axis passing through the origin. The velocity $\mathbf{v}$ of any point on the body is determined by its position vector $\mathbf{r}$ through the cross product: $\mathbf{v} = \boldsymbol{\omega} \times \mathbf{r}$.

This relationship defines a linear transformation $T(\mathbf{r}) = \boldsymbol{\omega} \times \mathbf{r}$. Let's ask our key questions: What is the kernel, and what is the range?

The kernel consists of all points $\mathbf{r}$ that are mapped to zero velocity. Physically, these are the stationary points. The cross product $\boldsymbol{\omega} \times \mathbf{r}$ is zero only when $\mathbf{r}$ is parallel to $\boldsymbol{\omega}$. This means the kernel is the line of vectors pointing along the angular velocity vector $\boldsymbol{\omega}$—it's the axis of rotation! This is perfectly intuitive: the points on the axis don't move.

Now, what about the range? The range is the set of all possible velocity vectors. A key property of the cross product is that the result, $\boldsymbol{\omega} \times \mathbf{r}$, is always perpendicular to both $\boldsymbol{\omega}$ and $\mathbf{r}$. This means every possible velocity vector $\mathbf{v}$ must be perpendicular to the axis of rotation $\boldsymbol{\omega}$. The set of all vectors perpendicular to $\boldsymbol{\omega}$ forms a plane through the origin. This plane is the range of the transformation.
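We can encode $T(\mathbf{r}) = \boldsymbol{\omega} \times \mathbf{r}$ as its standard skew-symmetric matrix and confirm both findings numerically; the particular $\boldsymbol{\omega}$ and sample point are arbitrary choices.

```python
import numpy as np

omega = np.array([1.0, 2.0, 3.0])   # an arbitrary angular velocity vector

# Matrix of T(r) = omega x r (the skew-symmetric "hat" form of omega)
W = np.array([[0.0,      -omega[2],  omega[1]],
              [omega[2],  0.0,      -omega[0]],
              [-omega[1], omega[0],  0.0]])

# Kernel: the rotation axis itself is sent to zero velocity
print(W @ omega)                    # [0. 0. 0.]

# Range: a plane (rank 2), and every output is perpendicular to omega
print(np.linalg.matrix_rank(W))     # 2
r = np.array([0.4, -1.3, 2.2])      # arbitrary point on the body
v = W @ r
print(np.dot(v, omega))             # ~0: velocity is perpendicular to the axis
```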

This is a beautiful result. No matter which point on the spinning top you choose, its velocity vector will always lie in the plane perpendicular to the axis of rotation. The physics of rotation constrains the possible outcomes to this specific subspace. The range of the transformation perfectly captures this physical constraint.

Unseen Worlds: Ranges in Abstract Spaces

We've had fun with vectors we can see and draw. But the true power and unity of linear algebra come from applying these same ideas to more abstract "vectors," like polynomials and matrices.

Let's consider the vector space of polynomials, $P_3$, which contains all polynomials of degree at most 3. A transformation $T$ might act on a polynomial $p(t)$ via differentiation, such as $T(p(t)) = t\,p'(t) - p(t)$. If we take a generic polynomial $p(t) = a + bt + ct^2 + dt^3$ and apply the transformation, a funny thing happens: the term with $t$ always vanishes, leaving an output of the form $-a + ct^2 + 2dt^3$. This tells us that the range, the set of all possible output polynomials, is spanned by the polynomials $\{1, t^2, t^3\}$. The transformation can never produce a polynomial with just a linear $t$ term. The range is a specific three-dimensional subspace within the four-dimensional space $P_3$.
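This can be checked by machine as well as by hand: in the monomial basis $\{1, t, t^2, t^3\}$ the transformation acts diagonally, since $T(t^k) = k t^k - t^k = (k - 1)t^k$. A short NumPy sketch:

```python
import numpy as np

# Matrix of T(p) = t p'(t) - p(t) on P3 in the basis {1, t, t^2, t^3}:
# T(t^k) = (k - 1) t^k, so the matrix is diag(k - 1) for k = 0, 1, 2, 3
M = np.diag([-1.0, 0.0, 1.0, 2.0])

rank = np.linalg.matrix_rank(M)
print(rank)       # 3: the range is spanned by {1, t^2, t^3}
print(M[:, 1])    # the t-column is zero: a lone linear term is never produced
```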

Sometimes, calculating the range directly is difficult. This is where the magnificent Rank-Nullity Theorem comes to our aid. It tells us that the dimension of the domain is perfectly split between the dimension of the kernel (what gets crushed to zero) and the dimension of the range (the size of the output space).

Consider a transformation on polynomials defined by an integral: $T(p)(x) = \int_0^x (p(t) - p(0))\,dt$ for $p(t)$ in $P_3(\mathbb{R})$. Figuring out the range directly seems complicated. But finding the kernel is easy! $T(p) = 0$ means the integral is zero for every $x$. Differentiating both sides tells us $p(x) - p(0) = 0$, or $p(x) = p(0)$. This means $p$ must be a constant polynomial. The kernel is the one-dimensional space of constant polynomials. Since the domain $P_3(\mathbb{R})$ has dimension 4, the Rank-Nullity Theorem immediately tells us that the dimension of the range must be $4 - 1 = 3$. Without breaking a sweat, we've determined the size of the output space!
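A quick way to confirm the count is to write out the matrix of $T$ in monomial bases; here we take the codomain to be $P_4$, since integration raises the degree by one. Note $T(1) = 0$ and $T(t^k) = x^{k+1}/(k+1)$ for $k = 1, 2, 3$.

```python
import numpy as np

# Matrix of T(p)(x) = integral_0^x (p(t) - p(0)) dt from P3 to P4,
# in the monomial bases {1, t, t^2, t^3} -> {1, x, x^2, x^3, x^4}
M = np.zeros((5, 4))
M[2, 1] = 1 / 2     # t   -> x^2 / 2
M[3, 2] = 1 / 3     # t^2 -> x^3 / 3
M[4, 3] = 1 / 4     # t^3 -> x^4 / 4
# the first column (image of the constant 1) stays zero

rank = np.linalg.matrix_rank(M)
print(rank, 4 - rank)   # rank 3, nullity 1 -- matching the Rank-Nullity count
```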

This principle is universal. It works just as well in the vector space of matrices. Let's take the space of all $2 \times 3$ matrices, which has dimension $2 \times 3 = 6$. If we are told that a linear transformation $L$ acting on this space has a kernel of dimension 2, we instantly know that its range must have dimension $6 - 2 = 4$. Or consider a transformation on the 9-dimensional space of $3 \times 3$ matrices, $T(A) = A + A^T - (\operatorname{tr}(A))I$. By identifying its kernel as the 3-dimensional space of skew-symmetric matrices, we deduce that its range must have dimension $9 - 3 = 6$. This happens to be the exact dimension of the subspace of symmetric matrices, which, it turns out, is precisely the range of this transformation. The theorem gives us a powerful shortcut to characterizing the space of possibilities.
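This last deduction can be verified numerically by assembling the $9 \times 9$ matrix of $T$ from its action on the standard basis matrices $E_{ij}$ and computing its rank:

```python
import numpy as np

def T(A):
    """The transformation T(A) = A + A^T - tr(A) I on 3x3 matrices."""
    return A + A.T - np.trace(A) * np.eye(3)

# Build the 9x9 matrix of T column by column from the basis matrices E_ij
cols = []
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3))
        E[i, j] = 1.0
        cols.append(T(E).ravel())
M = np.column_stack(cols)

rank = np.linalg.matrix_rank(M)
print(rank)   # 6: the dimension of the space of symmetric 3x3 matrices

# Spot-check: a skew-symmetric matrix lies in the kernel
S = np.array([[0.0, 1.0, -2.0], [-1.0, 0.0, 3.0], [2.0, -3.0, 0.0]])
print(np.allclose(T(S), 0.0))   # True
```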

The Deep Connection: Range, Rank, and Eigenvalues

We have seen the Rank-Nullity Theorem, $\dim(\text{domain}) = \dim(\text{kernel}) + \dim(\text{range})$, as an incredibly useful computational tool. But its meaning is deeper. It is a fundamental "conservation law" for dimension. A transformation takes the dimensions of its domain and partitions them: some are collapsed into the kernel, and the rest are preserved to form the range.

Now for the final, beautiful connection. What is the kernel, really? It is the set of all vectors $\mathbf{v}$ such that $T(\mathbf{v}) = \mathbf{0}$. If the transformation is represented by a matrix $A$, this is $A\mathbf{v} = 0\mathbf{v}$. The kernel is nothing more than the eigenspace corresponding to the eigenvalue $\lambda = 0$.

This observation forges a profound link between the geometry of the transformation and its algebraic DNA—its eigenvalues. The Rank-Nullity theorem can be rephrased:

$$\dim(\text{domain}) = (\text{geometric multiplicity of } \lambda = 0) + \dim(\text{range})$$

For the special case of diagonalizable matrices, where the geometric and algebraic multiplicities of eigenvalues are identical, this connection becomes even more striking. The dimension of the range (the rank) tells you exactly how many non-zero eigenvalues the matrix has, and the dimension of the kernel (the nullity) tells you how many zero eigenvalues it has.

Suppose you have a $5 \times 5$ diagonalizable matrix. You are told that its range is a two-dimensional subspace. This means its rank is 2. The Rank-Nullity Theorem insists that its nullity must be $5 - 2 = 3$. Since the matrix is diagonalizable, this nullity of 3 means the eigenvalue 0 must have an algebraic multiplicity of 3. In other words, the matrix must have exactly three eigenvalues equal to zero. Just by knowing the "size" of the transformation's output space, we have deduced a critical feature of its internal algebraic structure.
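Here is a sketch of this deduction run in reverse: we construct such a matrix explicitly as $A = PDP^{-1}$ with eigenvalues $7, -3, 0, 0, 0$ (the non-zero values are arbitrary choices) and confirm both the rank and the zero-eigenvalue count.

```python
import numpy as np

rng = np.random.default_rng(2)

# A diagonalizable 5x5 matrix with eigenvalues (7, -3, 0, 0, 0):
# A = P D P^{-1} for a random (almost surely invertible) P
D = np.diag([7.0, -3.0, 0.0, 0.0, 0.0])
P = rng.standard_normal((5, 5))
A = P @ D @ np.linalg.inv(P)

rank = np.linalg.matrix_rank(A, tol=1e-8)
print(rank)                   # 2: the range is a two-dimensional subspace

eigs = np.linalg.eigvals(A)
n_zero = int(np.sum(np.isclose(eigs, 0.0, atol=1e-6)))
print(n_zero)                 # 3: exactly three eigenvalues are zero
```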

From a simple plane in 3D space, to the constraints on a spinning planet, to the hidden structures in spaces of functions and matrices, and finally to the very heart of a matrix's eigenvalues—the concept of the range is a golden thread. It weaves together geometry, physics, and algebra, revealing that in mathematics, as in nature, the question of what is possible is one of the most powerful questions we can ask.