
Gelfand Theory

Key Takeaways
  • Gelfand theory transforms elements of an abstract commutative algebra into continuous functions on a specially constructed geometric space known as the character space.
  • The theory provides a powerful bridge between algebra and geometry, equating an element's algebraic spectrum with the range of its corresponding function (its Gelfand transform).
  • It reveals deep connections within mathematics, showing that Fourier analysis is a special instance of the Gelfand transform applied to convolution algebras.
  • The framework has practical applications in engineering, providing a simple method to determine a system's invertibility by analyzing the zeros of its frequency response.

Introduction

The world of abstract algebra, with its complex rules and intangible objects, can often seem disconnected from our intuitive understanding of space and functions. What if there were a way to translate these abstract structures into a more familiar, visual language? This is precisely the power of Gelfand theory, a cornerstone of modern analysis that builds a profound bridge between algebra and geometry. It provides a "pair of glasses" that allows us to see many abstract algebras for what they truly are: algebras of continuous functions on a hidden geometric space. This transformation not only demystifies algebraic concepts but also provides elegant solutions to complex problems.

This article explores the principles and power of this remarkable theory. In the first chapter, "Principles and Mechanisms," we will delve into the core machinery of Gelfand theory. We will discover the concepts of characters and the character space, and see how the Gelfand transform works to translate abstract elements into concrete functions. Subsequently, in "Applications and Interdisciplinary Connections," we will witness this theory in action, exploring how it unifies concepts in Fourier analysis, provides critical tools for engineering and signal processing, and even lays the groundwork for understanding the geometry of the quantum world.

Principles and Mechanisms

Imagine you are handed a strange, abstract object. You can combine these objects, add them, and multiply them according to a set of rules, but you have no idea what they are. This is the world of abstract algebra. Now, what if you had a magical pair of glasses that, when you put them on, revealed that these mysterious objects were, in fact, just familiar things in disguise—like continuous functions on some geometric space? All the abstract rules of addition and multiplication would suddenly become the simple, pointwise addition and multiplication of functions you learned in calculus. This is the magic of Gelfand theory. It provides the spectacles for looking at a vast class of commutative algebras and seeing them for what they truly are: algebras of functions.

Our journey is to understand how these magical glasses work. The secret lies in finding the hidden "geometric space" for our algebra and figuring out how to translate each abstract element into a concrete function on that space.

The Alchemist's Stone: Characters

To find the hidden space, we need a special kind of probe. This probe is called a character. A character is a map, let's call it $\phi$, that takes an element $x$ from our algebra and assigns to it a complex number, $\phi(x)$. But it's not just any map. It must respect the algebra's structure in a very particular way. It must be a homomorphism, which means for any two elements $x$ and $y$ in our algebra $\mathcal{A}$:

  1. $\phi(x+y) = \phi(x) + \phi(y)$ (it respects addition)
  2. $\phi(xy) = \phi(x)\phi(y)$ (it respects multiplication)

And, to be interesting, it can't just map everything to zero. So we add a third rule:

  3. $\phi$ is not the zero map.

The multiplicative property is the alchemist's stone. It's an incredibly strong constraint. Think of an element $x$ as a signal and $\phi$ as a detector that gives you a number. This property says that the number you get from the combined signal $xy$ is precisely the product of the numbers you get from each signal individually.

Where do we find such things? Let's consider a very simple algebra to see them in their natural habitat. Imagine an "island universe" with only three points, $\{p_1, p_2, p_3\}$. Our algebra, call it $A$, will be the set of all complex-valued functions on this tiny universe. So an element $f$ of $A$ is just a list of three numbers: $(f(p_1), f(p_2), f(p_3))$. Adding and multiplying these elements is done pointwise, as you'd expect.

What are the characters of this algebra? It turns out they are completely intuitive. One character is the "evaluation at $p_1$" map, $\phi_1(f) = f(p_1)$. Let's check: it obviously respects addition, and for multiplication, $\phi_1(fg) = (fg)(p_1) = f(p_1)g(p_1) = \phi_1(f)\phi_1(g)$. It works perfectly. Similarly, evaluation at $p_2$ and at $p_3$ are characters. And that's it: there are only three characters for this algebra, one for each point in our space. Each character's job is simply to "read off" the value of the function at a specific point.

Not every reasonable-looking map is a character. For instance, what about a map that averages the values at two points, like $\psi(f) = \tfrac{1}{2}(f(p_1) + f(p_2))$? This map respects addition, but it fails catastrophically at multiplication: the average of a product is not, in general, the product of the averages, so $\psi$ is not a character. This strict multiplicative requirement is what makes characters the perfect probes of the algebra's structure.
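To make this concrete, here is a minimal numerical sketch of the three-point algebra (the helper names `is_multiplicative`, `eval_p1`, and `average` are ours, chosen only for illustration): elements are 3-tuples of complex numbers, and we test that point evaluation passes the multiplicativity check while the averaging map fails it.

```python
import itertools

# Elements of the algebra A: functions on {p1, p2, p3}, stored as 3-tuples.
def add(f, g):
    return tuple(a + b for a, b in zip(f, g))

def mul(f, g):
    return tuple(a * b for a, b in zip(f, g))

# Two candidate maps A -> C.
eval_p1 = lambda f: f[0]                    # evaluation at p1: a character
average = lambda f: 0.5 * (f[0] + f[1])     # averaging map: not a character

def is_multiplicative(phi, samples):
    """Check phi(fg) == phi(f)phi(g) on all sample pairs."""
    return all(abs(phi(mul(f, g)) - phi(f) * phi(g)) < 1e-12
               for f, g in itertools.product(samples, repeat=2))

samples = [(1, 2, 3), (0, 1, -1), (2.5, 0, 4)]
print(is_multiplicative(eval_p1, samples))   # True
print(is_multiplicative(average, samples))   # False
```

Both maps respect addition; only the point evaluation survives the multiplicative test, exactly as the argument above predicts.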

The Hidden Landscape: The Character Space

The collection of all characters of an algebra $\mathcal{A}$ is an object in its own right. We call it the character space or maximal ideal space of $\mathcal{A}$, denoted $\Delta(\mathcal{A})$. For our simple three-point algebra, the character space was just a set of three "points," corresponding exactly to the original space $\{p_1, p_2, p_3\}$. When our algebra is already given to us as the functions on a space $X$ (like $C(X)$), its character space is just a copy of $X$ itself. The algebra wears its geometry on its sleeve.

But the real magic happens when the algebra is more abstract and its underlying geometry is hidden. Consider the algebra $A$ of all even continuous functions on the interval $[-1, 1]$; an even function is one with $f(t) = f(-t)$. What is the character space here? A character $\phi$ must assign a number to each even function, and it turns out that here, too, the characters are just point evaluations: for any point $t \in [-1, 1]$, the map $\phi_t(f) = f(t)$ is a character. But wait! Since every function $f$ in our algebra is even, $f(t) = f(-t)$. This means the character $\phi_t$ is exactly the same map as the character $\phi_{-t}$. The algebra itself cannot tell the difference between the point $t$ and the point $-t$.

So what is the space of distinct characters? We have one character for $t = 0$, and for every $t$ in $(0, 1]$ a single character corresponding to both $t$ and $-t$. The character space effectively "folds" the interval $[-1, 1]$ in half at the origin. The resulting space is, topologically, just the interval $[0, 1]$. Gelfand theory has revealed the true, hidden geometric landscape on which this algebra "lives": not $[-1, 1]$, but $[0, 1]$.
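A quick numerical illustration of the folding (the three sample functions are arbitrary even functions, chosen only for the demonstration): evaluation at $t$ and evaluation at $-t$ agree on every even function, so the two point evaluations define one and the same character.

```python
import math

# A few even continuous functions on [-1, 1].
evens = [lambda t: t**2, lambda t: math.cos(t), lambda t: abs(t)]

t = 0.7
# phi_t and phi_{-t} give the same number on every element of the algebra,
# so they are the same character: the space folds at the origin.
print(all(abs(f(t) - f(-t)) < 1e-12 for f in evens))   # True
```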

The Gelfand Transform: A Universal Representation

Now that we have our hidden space $\Delta(\mathcal{A})$, we can finally build our magical glasses. For any element $x$ in our abstract algebra $\mathcal{A}$, we define its Gelfand transform, $\hat{x}$, to be a function on the character space. How? Simple: the value of the function $\hat{x}$ at a character $\phi$ is just the number that $\phi$ assigns to $x$:

$$\hat{x}(\phi) = \phi(x)$$

This is the central construction. We have transformed the abstract element $x$ into a concrete, complex-valued function $\hat{x}$ on the space $\Delta(\mathcal{A})$. The map $x \mapsto \hat{x}$ is the Gelfand transform. Astonishingly, this transformation respects the algebra's structure perfectly. The transform of $x + y$ is the function $\widehat{x+y}$, which by the additivity of characters is just $\hat{x} + \hat{y}$; likewise the transform of $xy$ is $\widehat{xy} = \hat{x}\hat{y}$.
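In the three-point algebra, this construction can be spelled out in a few lines (a sketch; the helper name `gelfand` is ours). The characters are the three point evaluations, so the Gelfand transform of an element is just the tuple of its values at the three characters, and abstract multiplication becomes pointwise multiplication of transforms.

```python
# Characters of the three-point algebra: the three point evaluations.
characters = [lambda f: f[0], lambda f: f[1], lambda f: f[2]]

def gelfand(f):
    """Gelfand transform: the function phi -> phi(f), tabulated over Delta(A)."""
    return tuple(phi(f) for phi in characters)

def mul(f, g):
    """Multiplication in the algebra (pointwise)."""
    return tuple(a * b for a, b in zip(f, g))

f, g = (1, 2, 3), (4, 5, 6)
# The transform of a product is the pointwise product of the transforms.
print(gelfand(mul(f, g)) == mul(gelfand(f), gelfand(g)))   # True
```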

The Gelfand-Naimark theorem tells us that for a very large and important class of algebras, the commutative C*-algebras, this transformation is an isomorphism: a perfect, one-to-one translation. The abstract algebra $\mathcal{A}$ and the function algebra $C(\Delta(\mathcal{A}))$ are one and the same, just viewed from different perspectives.

The Other Side of the Coin: Maximal Ideals

We've been calling $\Delta(\mathcal{A})$ the "character space," but it also goes by another, more algebraic name: the "maximal ideal space." Why? This reveals a beautiful duality at the heart of mathematics.

For any character $\phi$, its kernel, $\ker(\phi)$, is the set of all elements that $\phi$ sends to zero. This kernel is not just any old set; it's a maximal ideal. An ideal is a subset of an algebra that's "sticky": if you multiply an element inside the ideal by any element of the whole algebra, the result is still stuck inside the ideal. A maximal ideal is one that is as large as it can be without being the entire algebra itself.

There is a one-to-one correspondence between characters and maximal ideals: each character $\phi$ uniquely determines the maximal ideal $\ker(\phi)$. This is why the definitional quirk that a character cannot be the zero map is so important. If we allowed the zero map, its kernel would be the entire algebra, which is not a maximal ideal by definition. Excluding it preserves this perfect duality.

This connection gives us another way to think about things. Take the algebra $A$ of continuous functions on $[0, 1]$. The set of all functions that vanish at the point $x = 1/3$ forms a maximal ideal, $M$. What happens if we look at the algebra "modulo" this ideal, written $A/M$? This means we treat two functions as equivalent if their difference lies in $M$, that is, if they have the same value at $1/3$. The algebra of these equivalence classes, $A/M$, turns out to be isomorphic to the complex numbers $\mathbb{C}$, and the isomorphism is exactly the evaluation map $f \mapsto f(1/3)$! Performing abstract algebra in the quotient space $A/M$ is literally the same as doing simple arithmetic on the function values at that one special point. The maximal ideal $M$ is the point $1/3$ in disguise.
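A small sketch of why the evaluation map on the quotient is well defined (the particular functions `f` and `m` are arbitrary choices for illustration): adding any element of $M$ to a function changes its equivalence class representative but not its value at $1/3$.

```python
t0 = 1 / 3

f  = lambda x: x**2 + 1            # a representative of its class in A/M
m  = lambda x: (x - t0) * x**3     # an element of M: it vanishes at 1/3
f2 = lambda x: f(x) + m(x)         # a different representative of the same class

# The element of M really does vanish at the special point...
print(abs(m(t0)) < 1e-12)          # True
# ...so evaluation at 1/3 gives the same answer for both representatives:
print(abs(f(t0) - f2(t0)) < 1e-12)  # True
```

Whatever representative we pick, the map "class of $f$ $\mapsto$ $f(1/3)$" returns the same complex number, which is exactly what an isomorphism $A/M \to \mathbb{C}$ requires.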

The Power of the Picture

This new perspective—seeing algebras as functions on their character spaces—is incredibly powerful. It allows us to solve difficult problems by translating them into a more intuitive setting.

The Spectrum Revealed: What is the range of values taken by the function $\hat{x}$? It is a fundamental theorem that this range is precisely the spectrum of the element $x$, denoted $\sigma(x)$. The spectrum is an algebraic concept: the set of all complex numbers $\lambda$ for which the element $x - \lambda e$ (where $e$ is the identity) does not have a multiplicative inverse. Gelfand theory turns this abstract algebraic property into a simple geometric one: the image of a function. This connection is used to prove the spectacular Gelfand-Mazur theorem, which states that if a complex Banach algebra is also a field (meaning every non-zero element has an inverse), it must be isomorphic to the complex numbers. The proof is beautifully simple with our new tools: for any $x$, we know its spectrum $\sigma(x)$ is non-empty, so pick $\lambda \in \sigma(x)$. Then $x - \lambda e$ is not invertible. But in a field, the only non-invertible element is zero! So $x - \lambda e = 0$, which means $x = \lambda e$. Every element in the algebra is just a scalar multiple of the identity; the entire algebra is a copy of $\mathbb{C}$.

Applying Functions to Operators: This picture allows us to do things that seem impossible. How can you take a continuous function, like $f(t) = \sqrt{t}$, and apply it to an abstract operator $T$? Gelfand theory provides the answer. Consider the commutative algebra generated by a normal operator $T$ (and the identity); its character space turns out to be the spectrum of the operator, $\sigma(T)$. The Gelfand transform turns $T$ into the simple identity function $\hat{T}(\lambda) = \lambda$ on its spectrum. Now, to "compute" $f(T)$, we just apply the function $f$ to the Gelfand transform $\hat{T}$, and then transform back: $f(T)$ is defined as the unique operator whose Gelfand transform is the function $f(\lambda)$. This "functional calculus" is a cornerstone of modern analysis, and Gelfand theory provides its most elegant and natural formulation.
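For finite-dimensional operators, this functional calculus reduces to the familiar spectral decomposition. A minimal numpy sketch (the helper name `apply_function` is ours), assuming a self-adjoint matrix with non-negative spectrum so that $\sqrt{t}$ is defined on it: diagonalize, apply $f$ to the eigenvalues, and recombine.

```python
import numpy as np

def apply_function(f, T):
    """Continuous functional calculus for a self-adjoint matrix T:
    T = V diag(lambda) V*, so f(T) = V diag(f(lambda)) V*."""
    eigvals, eigvecs = np.linalg.eigh(T)
    return eigvecs @ np.diag(f(eigvals)) @ eigvecs.conj().T

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # self-adjoint, spectrum {1, 3}
S = apply_function(np.sqrt, T)      # "sqrt(T)" via the spectrum

print(np.allclose(S @ S, T))        # True: S really is a square root of T
```

The matrix $S$ squares back to $T$, exactly what "applying $\sqrt{t}$ to the operator" should mean.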

Classifying Algebras: Finally, the Gelfand representation is a powerful classification tool. Are two algebras fundamentally the same (isomorphic)? To answer this, we can look at their character spaces: if the character spaces are not topologically the same (not homeomorphic), then the algebras cannot be isomorphic. For example, the algebra of continuous functions on a circle, $C(S^1)$, seems similar to the "disk algebra" $A(\mathbb{D})$ of functions that are continuous on the closed disk and holomorphic inside. But their character spaces are the circle $S^1$ and the closed disk $\overline{\mathbb{D}}$, respectively. A circle is not homeomorphic to a disk (one has a hole, the other doesn't), so the algebras are fundamentally different. This topological difference even has an observable algebraic consequence: in the disk algebra, every invertible element can be continuously deformed to the identity element, but in the circle algebra this fails (the function $f(z) = z$ cannot be so deformed).

From a simple idea—a map that respects multiplication—Gelfand theory builds a bridge between the abstract world of algebra and the visual, geometric world of functions on spaces. It uncovers hidden structures, proves deep theorems with stunning simplicity, and unifies vast areas of mathematics. It truly gives us a new way to see.

Applications and Interdisciplinary Connections

We have journeyed through the elegant machinery of Gelfand theory, seeing how it transforms the often-impenetrable world of abstract algebras into the more familiar landscape of functions on topological spaces. You might be tempted to think of this as a clever, but purely mathematical, sleight of hand. Nothing could be further from the truth. This transformation is not just a party trick; it is a profoundly powerful lens through which we can understand, and solve, problems across an astonishing spectrum of scientific and engineering disciplines. Having learned the principles, let us now embark on a tour of Gelfand theory in action, to see how this beautiful idea bridges worlds.

The Spectrum Revealed: What an Algebra "Looks Like"

The most immediate gift of Gelfand theory is a new way to think about the "spectrum" of an algebraic element. In the previous chapter, we defined the spectrum $\sigma(a)$ as the set of complex numbers $\lambda$ for which the element $a - \lambda\mathbf{1}$ has no inverse. This is a purely algebraic definition. But what does it mean?

Let's start with the simplest, most intuitive commutative algebra: the algebra $A = C(X)$ of continuous complex-valued functions on a compact space $X$. Gelfand's framework tells us something at once simple and profound: the character space of $C(X)$ is nothing but the space $X$ itself. Each character $\phi$ is simply an evaluation at a point $t \in X$, so that $\phi_t(f) = f(t)$. The Gelfand transform of a function $f$ is... well, it's just the function $f$ itself!

What does this tell us about invertibility? A function $f \in C(X)$ has a multiplicative inverse $g = 1/f$ if and only if $f$ is never zero on $X$. If it vanished somewhere, say at $t_0$, then $f(t_0)g(t_0)$ would have to be $0$, but it must also be $1$, which is impossible. So the question of whether $f - \lambda\mathbf{1}$ is invertible is simply the question of whether the function $f(t) - \lambda$ ever takes the value zero, which happens precisely when $\lambda = f(t)$ for some $t \in X$. Therefore, the spectrum of a function is simply its range! For instance, the function $f(t) = \sin(t)$ on the interval $[0, 2\pi]$ has spectrum equal to the set of all values it can take, the interval $[-1, 1]$. The abstract algebraic concept of a spectrum collapses into a familiar, concrete property of a function.
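A grid approximation makes the point tangible (a numerical sketch, not a proof: a finite grid can only approximate the range): sampling $\sin$ on $[0, 2\pi]$ and taking the extremes recovers the endpoints of the spectrum $[-1, 1]$.

```python
import numpy as np

# sigma(f) for f in C(X) is the range of f.  Approximate the range of
# sin on [0, 2*pi] by sampling on a fine grid.
t = np.linspace(0.0, 2 * np.pi, 10001)
values = np.sin(t)

print(round(values.min(), 6), round(values.max(), 6))   # -1.0 1.0
```

To decide whether $f - \lambda\mathbf{1}$ is invertible, one just asks whether $\lambda$ lies in this computed range.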

This identification of characters with point evaluations means that any question about the collective behavior of characters can be translated into a question about the function on its domain. If we want to find the smallest possible value of $|\phi(f)|$ over all characters $\phi$, for $f \in C([-1, 1])$, we are simply looking for the minimum value of $|f(t)|$ as $t$ ranges over the interval $[-1, 1]$, a standard problem from calculus. The theory provides a beautiful dictionary, translating abstract algebraic grammar into the familiar language of functions and spaces.

The Harmony of the Universe: Gelfand Theory and Fourier Analysis

The real magic begins when we apply Gelfand theory to algebras that are not so obviously about functions. Consider the set of all absolutely summable sequences indexed by the integers, $\ell^1(\mathbb{Z})$. This forms a Banach algebra in which the "multiplication" is not pointwise but the more mysterious operation of convolution. What could the character space of this algebra possibly be?

The answer is stunning: the character space of the convolution algebra $\ell^1(\mathbb{Z})$ is the unit circle, $S^1 = \{z \in \mathbb{C} : |z| = 1\}$. And what is the Gelfand transform? For a sequence $f = (f_n)$, its transform is the function $\hat{f}(z) = \sum_{n \in \mathbb{Z}} f_n z^n$ for $z$ on the unit circle. This is precisely the formula for a Fourier series! Gelfand theory reveals that the abstract algebra of sequences under convolution is secretly an algebra of continuous functions on the circle, where convolution in the sequence space becomes simple, pointwise multiplication in the function space.
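This can be checked numerically for finitely supported sequences (a sketch; the sequences and the sample point on the circle are arbitrary choices): the transform of a convolution equals the pointwise product of the transforms.

```python
import numpy as np

# Two finitely supported sequences in l^1(Z), supported on n = 0..3.
f = np.array([1.0, 0.5, 0.0, 0.25])
g = np.array([0.0, 2.0, 1.0, 0.0])
h = np.convolve(f, g)               # multiplication in the algebra: convolution

def transform(seq, w):
    """Gelfand transform at z = e^{iw}: sum_n seq[n] * z**n."""
    n = np.arange(len(seq))
    return np.sum(seq * np.exp(1j * w * n))

w = 0.9   # an arbitrary point e^{iw} on the unit circle
# Convolution downstairs becomes pointwise multiplication upstairs:
print(np.isclose(transform(h, w), transform(f, w) * transform(g, w)))   # True
```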

This is a monumental insight. It means we can determine whether a sequence is invertible by checking whether its Fourier series ever vanishes on the unit circle, and we can calculate the spectrum of a sequence by finding the range of values its Fourier series takes. This powerful connection extends to the continuous world as well: the algebra of integrable functions on the real line, $L^1(\mathbb{R})$, with convolution as multiplication, has the real line itself as its character space, and the Gelfand transform in this setting is none other than the celebrated Fourier transform. The deep unity of mathematics is laid bare: Fourier analysis, a tool indispensable to physics, engineering, and data analysis, is a special case of Gelfand theory.

Engineering a Solution: Invertibility and System Design

This connection to Fourier analysis is not just an academic curiosity; it has profound practical consequences, particularly in signal processing. Imagine a linear, time-invariant (LTI) system: a filter in an audio processor, a channel in a communication system, or a lens in an imaging device. Its behavior is characterized by its "impulse response," a sequence $h[n]$ in $\ell^1(\mathbb{Z})$. The output of the system is the convolution of the input signal with this impulse response.

A critical question for an engineer is: can we undo the effect of this system? Can we build an "inverse filter" that restores the original signal? In algebraic terms, this means finding an impulse response $g[n]$ such that the convolution $h * g$ equals the identity element $\delta[n]$ (a single unit pulse at time zero), so that filtering by $g$ gives back the original signal. This is precisely the question of whether $h$ is an invertible element of the algebra $\ell^1(\mathbb{Z})$.

Without Gelfand theory, this is a daunting problem. But with it, the answer becomes astonishingly simple, a result known as Wiener's $1/f$ theorem. A system with an absolutely summable impulse response has an absolutely summable inverse if and only if its Gelfand transform, the frequency response $H(e^{j\omega})$, is never zero for any frequency $\omega$. An intractable problem about infinite sums is transformed into the much easier task of checking whether a continuous function on a circle has any zeros. This principle is bedrock for modern system analysis and design. It tells us, for example, that an FIR filter (one with a finite impulse response) cannot be undone by another FIR filter unless it is a trivial delay and scaling: its Z-transform is a polynomial, and the product of two non-constant polynomials can never be identically 1, so any true inverse must have infinite impulse response.
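Here is a sketch of the engineer's workflow for a concrete filter (the filter $h[n] = \delta[n] - 0.4\,\delta[n-1]$ and the truncation length are our illustrative choices): evaluate the frequency response on a grid, verify it never vanishes, and confirm that the known geometric-series inverse convolves with $h$ to (approximately) the identity pulse.

```python
import numpy as np

# Impulse response h[n] = delta[n] - 0.4 delta[n-1].
h = np.array([1.0, -0.4])

# Frequency response H(e^{jw}) = h[0] + h[1] e^{-jw} on a grid of the circle.
w = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
H = h[0] + h[1] * np.exp(-1j * w)

# Wiener's criterion: invertible in l^1(Z) iff H never vanishes.
print(np.min(np.abs(H)) > 0)        # True (in fact |H| >= 0.6)

# The inverse filter is the geometric series g[n] = 0.4**n for n >= 0
# (absolutely summable, hence in l^1).  Check h * g ~ delta[n]:
g = 0.4 ** np.arange(50)
d = np.convolve(h, g)[:50]
print(np.allclose(d, np.eye(1, 50)[0]))   # True: a single pulse at time zero
```

Note that the inverse has infinitely many nonzero taps, matching the observation above that a non-trivial FIR filter never has an FIR inverse.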

The Rhythm of Change: Dynamics, Stability, and the Spectral Radius

Let's shift our gaze from signals to dynamical systems. Consider the evolution of a system described by repeated application of a matrix $A$: a vector $v_0$ becomes $v_1 = Av_0$, then $v_2 = A^2 v_0$, and so on. A fundamental question concerns the long-term behavior: does the vector's magnitude grow exponentially, decay to zero, or stay bounded? The rate of this growth or decay is captured by Lyapunov exponents. The largest of these, the top Lyapunov exponent, is defined by the limit $\lambda_{\max} = \lim_{n \to \infty} \frac{1}{n} \ln \|A^n\|$.

This expression should ring a bell. In the previous chapter, we met Gelfand's spectral radius formula: $\rho(A) = \lim_{n \to \infty} \|A^n\|^{1/n}$. The two formulas are practically cousins! Since the logarithm is a continuous function, we can write
$$\lambda_{\max} = \lim_{n \to \infty} \ln\left(\|A^n\|^{1/n}\right) = \ln\left(\lim_{n \to \infty} \|A^n\|^{1/n}\right) = \ln(\rho(A)).$$
For a deterministic linear system, the long-term asymptotic growth rate is simply the logarithm of the spectral radius of the matrix governing its evolution. Once again, Gelfand theory provides a remarkable shortcut: instead of wrestling with the difficult limit of matrix powers, we can find the spectral radius, the magnitude of the largest eigenvalue, and immediately understand the system's stability. The connection also highlights the utility of the Gelfand transform for computing the spectral radius itself. Calculating the limit of $\|a^n\|^{1/n}$ can be a combinatorial mess, but finding the maximum modulus of the Gelfand transform $\hat{a}$ is often far simpler.
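The identity can be verified numerically for a concrete matrix (the matrix and the cutoff $n = 2000$ are illustrative choices; the finite $n$ only approximates the limit): the brute-force growth rate $\frac{1}{n}\ln\|A^n\|$ lands right on $\ln \rho(A)$.

```python
import numpy as np

A = np.array([[0.9, 0.5],
              [0.0, 0.7]])   # eigenvalues 0.9 and 0.7, so rho(A) = 0.9

# Brute force: (1/n) ln ||A^n|| for a large n (spectral norm).
n = 2000
growth = np.log(np.linalg.norm(np.linalg.matrix_power(A, n), 2)) / n

# Shortcut via Gelfand's formula: ln(rho(A)).
rho = max(abs(np.linalg.eigvals(A)))

print(np.isclose(growth, np.log(rho), atol=1e-2))   # True
```

One eigenvalue computation replaces the unwieldy limit of matrix powers.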

Rebuilding Worlds: Recovering Topology from Algebra

We have seen how Gelfand theory takes an algebra and produces a space. The most profound result of all, the Gelfand-Naimark theorem, tells us that for a special class of algebras (commutative C*-algebras), this process can be reversed. The algebraic structure of $C(X)$ contains all the information about the topological structure of the space $X$. If you give me the algebra $C(X)$ without telling me what $X$ is, I can reconstruct $X$ (up to homeomorphism). If two such algebras, $C(X)$ and $C(Y)$, are algebraically identical (isomorphic), then the underlying spaces $X$ and $Y$ must be topologically identical (homeomorphic). It's like being able to reconstruct the precise shape and form of a country just by studying the laws that govern it.

But what about the non-commutative algebras that lie at the heart of quantum mechanics? Here, observables like position and momentum don't commute, and the algebra they form is non-commutative. Can we still extract geometric information? The answer is a resounding yes. Consider the non-commutative algebra $C(X, M_n(\mathbb{C}))$ of continuous functions from a space $X$ into the $n \times n$ matrices. If $C(X, M_n(\mathbb{C}))$ is isomorphic to $C(Y, M_n(\mathbb{C}))$, it turns out that $X$ and $Y$ must still be homeomorphic. The clever trick is to look at the center of the non-commutative algebra, the set of elements that commute with everything. This center forms a commutative subalgebra, which is none other than a copy of $C(X)$! An isomorphism between the large non-commutative algebras forces an isomorphism between their centers, and by the classic Gelfand-Naimark theorem the underlying spaces must be the same.
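The key fact behind this trick, that the center of the matrix algebra $M_n(\mathbb{C})$ consists exactly of the scalar matrices, is easy to probe numerically (a sketch; the test matrices are arbitrary choices, and commuting with a finite sample is only evidence, not a proof):

```python
import numpy as np

def commutes_with_all(Z, samples):
    """Check whether Z commutes with every matrix in the sample list."""
    return all(np.allclose(Z @ A, A @ Z) for A in samples)

# A couple of test matrices in M_3(C).
samples = [np.array([[0., 1., 0.],
                     [0., 0., 1.],
                     [1., 0., 0.]]),      # a cyclic permutation matrix
           np.arange(9.0).reshape(3, 3)]  # a generic matrix

# A scalar matrix lies in the center...
print(commutes_with_all(2.0 * np.eye(3), samples))            # True
# ...but a non-scalar diagonal matrix already fails to commute:
print(commutes_with_all(np.diag([1.0, 2.0, 3.0]), samples))   # False
```

Only the scalar copies of the identity survive the commuting test, which is why the center of $C(X, M_n(\mathbb{C}))$ collapses to scalar-valued functions, i.e. to $C(X)$.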

This idea—that the geometry of a "space" can be encoded in, and recovered from, an algebra of "functions" on it, even a non-commutative one—is the foundational principle of the field of non-commutative geometry. It allows mathematicians and physicists to explore bizarre new "quantum spaces" that defy classical geometric intuition, all by studying the algebras that describe them. The journey that began with translating algebra into functions has come full circle, leading us to define new kinds of geometry through the language of algebra, forever expanding our vision of what a "space" can be.