Berezin Integral

Key Takeaways
  • Berezin integration is a form of calculus based on anticommuting Grassmann variables, which have the nilpotent property ($\theta^2 = 0$).
  • The Berezin integral acts as a selection operator, not an area calculation, returning 1 only for the coefficient of the highest-order term in the integrand.
  • Gaussian integrals over Grassmann variables provide an elegant representation for the determinant of a matrix, a cornerstone result in theoretical physics.
  • This framework is the natural language for describing fermions in quantum field theory, enabling calculations of partition functions and propagators.

Introduction

In physics, describing particles like electrons requires adhering to the Pauli exclusion principle, a fundamental rule stating that no two identical fermions can occupy the same quantum state. This antisymmetry poses a significant mathematical challenge, as the commutative algebra of ordinary numbers ($xy = yx$) is fundamentally ill-suited to capture a world where swapping two particles introduces a negative sign. How, then, can we build a consistent calculus for these foundational components of matter? This article addresses this gap by introducing Berezin integration, a powerful and elegant formalism built upon the strange rules of anticommuting numbers. We will first explore the core "Principles and Mechanisms," delving into the world of Grassmann variables where squares are zero and integration is a selection rule. Following this, the "Applications and Interdisciplinary Connections" section will reveal how this seemingly abstract calculus provides profound insights and computational power in fields ranging from quantum field theory to pure mathematics, culminating in one of the most beautiful theorems of the 20th century.

Principles and Mechanisms

Imagine we need to describe a world fundamentally different from our own. A world where particles, like electrons, refuse to occupy the same state, a behavior governed by the famous Pauli exclusion principle. To build a mathematical language for such a world, we can't use ordinary numbers. If two particles are described by variables $x$ and $y$, swapping them should introduce a minus sign: the state should be antisymmetric. But for ordinary numbers, $x \cdot y$ is the same as $y \cdot x$. We need something new, something that naturally encodes this "antisocial" behavior. This is the door to the world of Grassmann numbers and the beautifully strange calculus that governs them.

An Algebra of "Nothing Squared"

Let's start with the building blocks of this new world: Grassmann variables, often denoted by Greek letters like $\theta$ or $\psi$. Unlike the numbers you're used to, which commute ($xy = yx$), Grassmann variables anticommute. For any two such variables, $\theta_1$ and $\theta_2$, the fundamental rule is:

$$\theta_1 \theta_2 = - \theta_2 \theta_1$$

This simple rule has a startling and profound consequence. What happens if you take a single Grassmann variable, $\theta$, and multiply it by itself? Following the rule, we'd have $\theta \theta = - \theta \theta$. The only way a quantity can be equal to its own negative is if it is zero. So, for any Grassmann variable:

$$\theta^2 = 0$$

This isn't an approximation or a special case; it's a fundamental property. The square of any Grassmann variable is identically zero. This property, called nilpotency, dramatically simplifies the algebra. Consider a function that we normally express as an infinite Taylor series, like the exponential. For a regular number $x$, we have $e^{cx} = 1 + cx + \frac{(cx)^2}{2!} + \frac{(cx)^3}{3!} + \dots$. But for a Grassmann variable, this series stops dead in its tracks.

$$\exp(c\theta) = 1 + c\theta + \frac{c^2\theta^2}{2} + \dots = 1 + c\theta$$

All terms from $\theta^2$ onwards simply vanish! This is a recurring theme: functions of Grassmann variables are not intimidating infinite series, but finite, manageable polynomials. Even an expression involving two variables like $\exp(a \theta_1 \theta_2)$ truncates instantly, since $(\theta_1 \theta_2)^2 = \theta_1 \theta_2 \theta_1 \theta_2 = -\theta_1 \theta_1 \theta_2 \theta_2 = 0$. Thus, $\exp(a \theta_1 \theta_2) = 1 + a \theta_1 \theta_2$. This is the strange but simple world we are about to explore.
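This truncation is easy to check mechanically. Below is a minimal sketch (an illustration, not standard library code) that represents an element $a + b\theta$ of the single-variable Grassmann algebra as the pair `(a, b)`, so nilpotency is baked into the multiplication rule; summing the exponential series then visibly stops after the linear term.

```python
import math

# a + b*theta is stored as the pair (a, b); any theta^2 term in a product
# vanishes by nilpotency, so multiplication closes on this representation.
def gmul(x, y):
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2, a1 * b2 + b1 * a2)  # the b1*b2*theta^2 piece drops out

def gexp(c, terms=10):
    """Sum the series exp(c*theta) = sum_n (c*theta)^n / n! term by term."""
    total, power = (0.0, 0.0), (1.0, 0.0)  # power starts at the scalar 1
    for n in range(terms):
        total = (total[0] + power[0] / math.factorial(n),
                 total[1] + power[1] / math.factorial(n))
        power = gmul(power, (0.0, c))  # multiply by c*theta
    return total

print(gexp(3.0))  # (1.0, 3.0): the series truncates to 1 + 3*theta
```

However many terms we ask for, every contribution beyond $n = 1$ is identically zero, exactly as the truncated expansion above predicts.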

Integration as Selection

Now, how do we do calculus in this world? What does "integration" even mean for variables that don't trace out a continuous line? The Berezin integral, named after Felix Berezin, redefines the concept. It is not about finding the "area under a curve." Instead, it's a formal procedure, a rule for selection. For a single Grassmann variable $\theta$, the rules are as simple as they are strange:

$$\int d\theta \, 1 = 0, \quad \text{and} \quad \int d\theta \, \theta = 1$$

The integral of a constant is zero, and the integral "selects" the linear part of the function, returning its coefficient. It's more like a projection operator in linear algebra than the integration you learned in calculus.

When we have multiple variables, say $\theta_1$ and $\theta_2$, we integrate them one by one. The differentials themselves, $d\theta_1$ and $d\theta_2$, are also defined to be anticommuting objects. The most important rule for multiple variables is that an integral is non-zero only if the integrand contains every single variable of integration exactly once. Think of it as a key needing the right number of grooves to fit the lock. If our lock is $\int d\theta_1 d\theta_2$, the "key" must contain both $\theta_1$ and $\theta_2$.

For instance, if we try to compute $\int d\theta_1 d\theta_2 d\theta_3 \, \theta_1 \theta_2$, we're missing a $\theta_3$. The integral over $d\theta_3$ finds nothing to "grab," so it gives zero. The entire integral vanishes. Similarly, $\int d\theta_1 d\theta_2 \, \theta_1(1+\theta_2)$ simplifies to $\int d\theta_1 d\theta_2 \, \theta_1\theta_2$, because the term $\theta_1$ on its own lacks a $\theta_2$ and is eliminated by the $d\theta_2$ integration.

By convention, the integral over all variables of the highest-order term is normalized to one:

$$\int d\theta_1 d\theta_2 \dots d\theta_N \, \theta_N \dots \theta_2 \theta_1 = 1$$

The specific ordering matters! If we instead integrate $\int d\theta_1 d\theta_2 \dots d\theta_N$ over the product $\theta_1 \theta_2 \dots \theta_N$, the result is $(-1)^{N(N-1)/2}$, the sign accumulated by the swaps needed to reverse the order of the variables. This sensitivity to order is a direct echo of the anticommuting nature of the variables themselves.
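The selection rule fits in a few lines of code. In this toy sketch (an illustration under the conventions stated above, not a library), an element of the algebra is a dict mapping sorted tuples of variable indices to coefficients; each integration keeps only the monomials containing its variable, paying one sign per transposition to move it to the front, and the rightmost differential acts first.

```python
def berezin1(f, var):
    """One Berezin integration: keep monomials containing `var`, move
    theta_var to the front (one sign per transposition), then delete it."""
    out = {}
    for mon, c in f.items():
        if var in mon:
            pos = mon.index(var)
            rest = mon[:pos] + mon[pos + 1:]
            out[rest] = out.get(rest, 0.0) + (-1) ** pos * c
    return out

def integrate(f, measure):
    """measure = [1, 2] means d(theta_1) d(theta_2); rightmost acts first."""
    for var in reversed(measure):
        f = berezin1(f, var)
    return f.get((), 0.0)

# Monomials are stored in sorted order, so theta_2 theta_1 is (1, 2) with -1.
print(integrate({(1, 2): -1.0}, [1, 2]))  # 1.0, the normalization rule
print(integrate({(1, 2): +1.0}, [1, 2]))  # -1.0 = (-1)^(N(N-1)/2) for N = 2
print(integrate({(1,): 1.0, (1, 2): 1.0}, [1, 2]))  # the lone theta_1 drops out
```

The first two results reproduce the normalization $\int d\theta_1 d\theta_2\,\theta_2\theta_1 = 1$ and the reversed-order sign for $N = 2$; the third mirrors the $\theta_1(1 + \theta_2)$ example, where only the top-form piece survives.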

The Great Connection: Determinants from Thin Air

So far, this might seem like a peculiar set of formal rules. But now we arrive at a truly beautiful connection, where this abstract algebra unexpectedly solves a very concrete problem. Let’s consider a general Gaussian integral, of the form we see constantly in probability and quantum mechanics, but now with Grassmann variables.

Let's take two sets of variables, $\theta_1, \theta_2$ and $\phi_1, \phi_2$, and a $2 \times 2$ matrix of ordinary numbers, $A$. We want to compute:

$$I = \int d\theta_1 d\theta_2 d\phi_1 d\phi_2 \, \exp\left( \sum_{i,j=1}^{2} \theta_i A_{ij} \phi_j \right)$$

This looks fearsome, but we know the exponential is just a short polynomial. Let $S = \sum \theta_i A_{ij} \phi_j$. The integral will be the coefficient of the top-form term, $\theta_1 \theta_2 \phi_1 \phi_2$, in the expansion of $\exp(S) = 1 + S + \frac{1}{2}S^2 + \dots$. The $1$ and $S$ terms don't have enough variables. The term we need must come from $S^2$.

Let's see what happens when we calculate $S^2 = (\theta_1 A_{11} \phi_1 + \theta_1 A_{12} \phi_2 + \theta_2 A_{21} \phi_1 + \theta_2 A_{22} \phi_2)^2$. Most of the cross-products will be zero because they contain a $\theta_i^2$ or a $\phi_j^2$. For instance, $(\theta_1 A_{11} \phi_1)(\theta_1 A_{12} \phi_2)$ has a $\theta_1^2$ and vanishes. The only terms that survive are those where each $\theta_i$ and $\phi_j$ appears exactly once. The only surviving pair of terms in the expansion of $S^2$ comes from $(\theta_1 A_{11} \phi_1)(\theta_2 A_{22} \phi_2)$ and $(\theta_1 A_{12} \phi_2)(\theta_2 A_{21} \phi_1)$.

Working through the anticommutations, we find:

$$(\theta_1 A_{11} \phi_1)(\theta_2 A_{22} \phi_2) \implies -(A_{11} A_{22}) \, \theta_1 \theta_2 \phi_1 \phi_2$$
$$(\theta_1 A_{12} \phi_2)(\theta_2 A_{21} \phi_1) \implies +(A_{12} A_{21}) \, \theta_1 \theta_2 \phi_1 \phi_2$$

The expansion of $\frac{1}{2} S^2$ includes each of these products twice (once in each order), so the factor of $\frac{1}{2}$ cancels. The coefficient of the top form $\theta_1 \theta_2 \phi_1 \phi_2$ is therefore $A_{12}A_{21} - A_{11}A_{22}$.

This is the negative of the determinant of the matrix $A$! This is a spectacular result. The esoteric rules of Berezin integration have conspired to compute one of the most fundamental objects in linear algebra. In general, for an $N \times N$ matrix $M$:

$$\int d\bar{\psi}_1 d\psi_1 \dots d\bar{\psi}_N d\psi_N \, \exp\left(-\sum_{i,j} \bar{\psi}_i M_{ij} \psi_j\right) = \det(M)$$

This formula is a cornerstone of modern theoretical physics, allowing physicists to represent the complicated dynamics of many-fermion systems as an integral over a determinant. This is also hinted at in simpler problems, where a similar calculation for a product of linear functions yields a determinant-like structure.
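The Gaussian formula can be verified by brute force. The sketch below is a from-scratch toy (illustrative sample matrix, interleaved measure $d\bar\psi_1 d\psi_1 d\bar\psi_2 d\psi_2$, and the conventions stated earlier in this article): it expands $\exp(-\bar\psi M \psi)$ as a finite polynomial and applies the Berezin rules, reproducing $\det M$.

```python
import math

def gr_mul(f, g):
    """Product in a Grassmann algebra.  Elements are dicts mapping sorted
    tuples of generator indices to ordinary coefficients; () is the scalar."""
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            merged = m1 + m2
            if len(set(merged)) < len(merged):
                continue  # a repeated generator means theta^2 = 0
            inv = sum(1 for i in range(len(merged))
                      for j in range(i + 1, len(merged))
                      if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0.0) + (-1) ** inv * c1 * c2
    return out

def berezin(f, measure):
    """Iterated Berezin integral; the rightmost differential acts first."""
    for var in reversed(measure):
        nxt = {}
        for mon, c in f.items():
            if var in mon:
                pos = mon.index(var)
                rest = mon[:pos] + mon[pos + 1:]
                nxt[rest] = nxt.get(rest, 0.0) + (-1) ** pos * c
        f = nxt
    return f.get((), 0.0)

# Interleaved generators: psibar_1 -> 0, psi_1 -> 1, psibar_2 -> 2, psi_2 -> 3.
M = [[2.0, 1.0], [3.0, 5.0]]  # det M = 7 (sample values)
S = {}  # the exponent  -sum_ij psibar_i M_ij psi_j
for i in range(2):
    for j in range(2):
        term = gr_mul({(2 * i,): 1.0}, {(2 * j + 1,): 1.0})  # psibar_i psi_j
        for mon, c in term.items():
            S[mon] = S.get(mon, 0.0) - M[i][j] * c

# exp(S) truncates to a polynomial: 1 + S + S^2/2 (+ vanishing higher terms).
expS, power = {(): 1.0}, {(): 1.0}
for n in range(1, 5):
    power = gr_mul(power, S)
    for mon, c in power.items():
        expS[mon] = expS.get(mon, 0.0) + c / math.factorial(n)

result = berezin(expS, [0, 1, 2, 3])  # d psibar_1 d psi_1 d psibar_2 d psi_2
print(result)  # 7.0, equal to det M
```

Only the top-form term of $\tfrac{1}{2}S^2$ survives the four integrations, and its coefficient is exactly the alternating sum of products that defines the determinant.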

A Backwards Change of Pace: The Berezinian Jacobian

In ordinary calculus, when we change variables, say from $x$ to $u$ via $x = au$, the integration measure changes as $dx = a \, du$. The "volume element" stretches by the factor $a$. The factor that relates the old and new integration measures is called the Jacobian.

What happens if we try this with Grassmann variables? Let's define a new variable $\eta = a\theta$, where $a$ is just an ordinary number. We want to find the Jacobian $J$ such that $d\eta = J \, d\theta$. We can find it by demanding that the fundamental integral remains consistent. We know that $\int d\eta \, \eta$ must equal $1$. Let's write this in terms of $\theta$:

$$1 = \int d\eta \, \eta = \int (J \, d\theta)(a\theta) = Ja \int d\theta \, \theta$$

Since we defined $\int d\theta \, \theta = 1$, we get $1 = Ja$. This implies that the Jacobian is $J = \frac{1}{a}$. This is completely backward from what we are used to! For a set of transformations $\eta_i = \sum_j M_{ij} \theta_j$, the Jacobian is not $\det(M)$, but $(\det(M))^{-1}$. This inverse relationship is a hallmark of Berezin integration and is essential for maintaining consistency when changing coordinates in this anticommuting world.
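A quick arithmetic check of the inverse-Jacobian rule for two variables (sample matrix values are illustrative): expanding $\eta_2 \eta_1$ by anticommutation shows it equals $\det(M)\,\theta_2\theta_1$, so keeping $\int d\eta_1 d\eta_2 \, \eta_2 \eta_1 = 1$ forces the measure to carry a factor $(\det M)^{-1}$.

```python
# eta_i = sum_j M[i][j] theta_j with 0-based indices.  By anticommutation:
#   eta_2 eta_1 = (M21 theta_1 + M22 theta_2)(M11 theta_1 + M12 theta_2)
#               = (M21*M12 - M22*M11) theta_1 theta_2
#               = det(M) * theta_2 theta_1        (since theta_2 theta_1 = -theta_1 theta_2)
M = [[2.0, 1.0], [3.0, 5.0]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
coeff = M[1][0] * M[0][1] - M[1][1] * M[0][0]  # coefficient of theta_1 theta_2
assert coeff == -det                           # i.e. eta_2 eta_1 = det(M) theta_2 theta_1

# Normalizing the eta-integral to 1 then requires
#   d(eta_1) d(eta_2) = det(M)**-1 d(theta_1) d(theta_2),
# the inverse of the ordinary-calculus Jacobian.
print(det)  # 7.0
```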

A Physicist's Power Tool

Why go to all this trouble? Because this machinery is an incredibly powerful tool for calculation, especially in quantum field theory. The Gaussian integral formula is not just for calculating determinants; it's a "generating functional." By adding source terms or "inserting" other variables inside the integral, we can calculate all sorts of physical quantities.

For example, computing an integral like

$$I = \int d^2\bar\psi \, d^2\psi \, (\bar\psi_1 \psi_2) \exp(-\bar\psi M \psi)$$

might seem like an ordeal. But using the generating functional formalism, one can show this integral neatly evaluates to an element of the inverse matrix, $M^{-1}$. Specifically, the answer is $(\det M) \times (M^{-1})_{21}$ (up to an overall sign fixed by the ordering conventions for the insertion and the measure), which for a $2 \times 2$ matrix simplifies to $-M_{21}$. These integrals are used to compute "propagators" or "Green's functions," which describe how a particle travels from one point to another. The fact that this strange integral over anticommuting numbers can directly calculate such a physically crucial quantity is a testament to its power and deep connection to the structure of our universe.
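The reason entries of $(\det M)\,M^{-1}$ appear is Jacobi's formula, $\partial \det M / \partial M_{ij} = \det(M)\,(M^{-1})_{ji}$, which is what differentiating the Gaussian generating functional with respect to a matrix element produces. A finite-difference sketch (sample values, not the article's own computation) confirms the identity:

```python
# Jacobi's formula behind the propagator claim:
#   d(det M)/dM_ij = det(M) * (M^{-1})_ji
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

M = [[2.0, 1.0], [3.0, 5.0]]  # illustrative values
eps = 1e-6
for i in range(2):
    for j in range(2):
        Mp = [row[:] for row in M]
        Mp[i][j] += eps                         # perturb one entry
        numeric = (det2(Mp) - det2(M)) / eps    # finite-difference derivative
        exact = det2(M) * inv2(M)[j][i]         # Jacobi's formula
        assert abs(numeric - exact) < 1e-4
print("Jacobi's formula verified")
```

For the $2 \times 2$ case, $\det(M)(M^{-1})_{21} = -M_{21}$ follows immediately from the cofactor form of `inv2`, matching the simplification quoted above.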

From a simple anticommutation rule, a whole new world of mathematics unfolds—a world where squares are zero, functions are finite polynomials, and integrals are selectors that magically produce determinants. It is a perfect example of how inventing a new mathematical language, no matter how strange it seems, can give us a profound new way to describe and understand reality.

Applications and Interdisciplinary Connections

Now that we have acquainted ourselves with the curious rules of Berezin integration, it is time for the real fun to begin. You might be tempted to think of these anticommuting variables as a mere mathematical curiosity, a strange game with peculiar rules. But nature, it turns out, plays this game. This abstract algebra is not just a contrivance; it is a key that unlocks a breathtaking landscape of deep and beautiful connections across physics and mathematics. Let us embark on a journey to see what this strange calculus is good for.

A Surprising New Look at an Old Friend: The Determinant

Our first stop is in the familiar territory of linear algebra, but we are about to see it in a completely new light. Consider the determinant of a matrix, a number we all learn to calculate in school. It tells us how a linear transformation scales volumes. We have rules for computing it, like expanding by minors, which can become quite cumbersome for large matrices. What if I told you there is another way, one that seems to come from another world? We can represent the determinant of an $N \times N$ matrix $A$ as an integral over Grassmann variables:

$$\det(A) = \int d\bar{\theta}_1 d\theta_1 \dots d\bar{\theta}_N d\theta_N \, \exp\left( -\sum_{i,j=1}^{N} \bar{\theta}_i A_{ij} \theta_j \right)$$

This formula is nothing short of magic. An integral, typically associated with summing up continuous values, perfectly repackages the discrete, combinatorial nature of the determinant. Why does it work? The secret lies in the very nature of Berezin integration. To get a non-zero result, the integrand must contain each and every integration variable exactly once. When we expand the exponential, the only term that survives the integration is the one proportional to $\bar{\theta}_1 \theta_1 \bar{\theta}_2 \theta_2 \dots \bar{\theta}_N \theta_N$. The anticommuting nature of the $\theta$ variables automatically takes care of the alternating signs (the signature of the permutation in the Leibniz formula for the determinant) that are so crucial. It's a beautiful conspiracy where the algebra does all the hard bookkeeping for us. This remarkable formula is not just a theoretical novelty; it can be used to explicitly compute determinants for any matrix, be it the simple identity matrix or a more general numerical matrix.

The story does not end there. For a special class of matrices, the antisymmetric ones, there exists a related quantity called the Pfaffian, which is, in a sense, the "square root" of the determinant. The Berezin integral provides an even more natural representation for the Pfaffian, linking it to an integral over a single set of real Grassmann variables. This connection is so profound that we can use the integral representation to explore properties of the Pfaffian, such as how it changes when we vary the elements of the matrix. This elegant formalism is a powerful tool, but its true significance is revealed when we see what these variables were born to describe: the fundamental particles of matter.
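The "square root of the determinant" claim is easy to check numerically. For a $4 \times 4$ antisymmetric matrix the Pfaffian reduces to three terms, $\mathrm{Pf}(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$, and its square should reproduce the Leibniz-formula determinant; the sample matrix below is illustrative.

```python
from itertools import permutations

# An antisymmetric 4x4 sample matrix (A[j][i] = -A[i][j], zero diagonal).
A = [[ 0.0,  2.0, -1.0,  3.0],
     [-2.0,  0.0,  4.0, -5.0],
     [ 1.0, -4.0,  0.0,  6.0],
     [-3.0,  5.0, -6.0,  0.0]]

# Pfaffian of a 4x4 antisymmetric matrix (0-based indices):
pf = A[0][1] * A[2][3] - A[0][2] * A[1][3] + A[0][3] * A[1][2]

def det(M):
    """Determinant via the Leibniz formula: sum over signed permutations."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inv * prod
    return total

print(pf)            # 19.0
print(pf ** 2, det(A))  # the two agree: Pf(A)^2 == det(A)
```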

The Language of Quantum Fields: Fermions and Propagators

The Pauli exclusion principle states that no two identical fermions (particles like electrons, protons, and neutrons) can occupy the same quantum state. If we represent the creation of a fermion in a state $i$ by a variable $\psi_i^\dagger$, and in a state $j$ by $\psi_j^\dagger$, then creating one and then the other in a different order should give the opposite quantum state: $\psi_i^\dagger \psi_j^\dagger = -\psi_j^\dagger \psi_i^\dagger$. And trying to create two fermions in the same state gives nothing: $(\psi_i^\dagger)^2 = 0$. This is precisely the algebra of Grassmann variables!

Berezin integration is therefore the natural language of quantum field theory (QFT) for fermions. In the path integral formulation of QFT, we sum over all possible "histories" of a system. For fermions, this "sum" is a Berezin integral. The "partition function," $Z$, which encodes all the statistical and thermodynamic information of a quantum system, can be calculated as such an integral. For example, in a simple model of Majorana fermions (particles that are their own antiparticles) on a two-site lattice, the partition function can be computed directly. The result elegantly depends on physical parameters like the chemical potential $\mu$, a hopping amplitude $t$, and a superconducting pairing term $\Delta$, demonstrating a tangible link between this abstract mathematics and measurable condensed matter physics.

Furthermore, QFT is not just about static properties; it's about interactions and dynamics. A central question is: if we create a particle at one point in spacetime, what is the probability amplitude to find it at another? This quantity is called the propagator, or a two-point correlation function. The machinery of Berezin integration provides a master key for calculating these. By introducing "source" terms into our path integral, we define a generating functional. Taking derivatives with respect to these sources magically spits out the correlation functions we desire. In a simple model, this procedure reveals one of the deepest truths of QFT: the propagator is nothing but the inverse of the matrix that defines the system's action.

Taming the Random: Supersymmetry and Averages

So far, we have discussed well-defined, ordered systems. But what about the messy, complex, and chaotic parts of the world? Consider the nucleus of a heavy atom, with its hundreds of interacting protons and neutrons, or an electron moving through a material riddled with impurities. The exact energy levels of such systems are impossibly complex. Instead of predicting them one by one, we ask statistical questions about their distribution. This is the domain of Random Matrix Theory (RMT).

A central challenge in RMT is to compute the average of quantities over an entire ensemble of random matrices. A particularly thorny problem is averaging the inverse of a matrix, or its determinant, which often appears in physical observables. Naively, averaging the inverse is not the same as taking the inverse of the average. Here, Berezin integration provides a spectacularly clever trick, often called the "supersymmetry method." The idea is to represent a troublesome denominator, like $\det(A)$, as a Gaussian integral over ordinary commuting ("bosonic") variables. The numerator, meanwhile, can be written as an integral over anticommuting ("fermionic") variables.

By combining them into a single "super-integral," one can perform the difficult average over the random matrix elements first. The Gaussian averaging process creates new couplings between the super-variables. Miraculously, the resulting integral, though it looks complicated, is often far more tractable than the original problem. This method allows for the exact calculation of quantities like the average resolvent of a matrix from the Gaussian Orthogonal Ensemble (GOE), a cornerstone of RMT. The same principle is the foundation of the so-called "nonlinear sigma model," a powerful theoretical framework for studying quantum chaos and transport in disordered electronic systems. The simplest illustration of this boson-fermion partnership is seen in mixed integrals where both types of variables are coupled together, yet the integral can be solved by handling each in turn.

Geometry and Topology: The Deepest Connections

This idea of combining commuting and anticommuting variables, which we have been calling a "trick," is in fact a window into a profound mathematical structure: supersymmetry. We can formalize this by imagining "super-spaces" that have both ordinary spatial dimensions and new, anticommuting dimensions. In these exotic geometries, many of the familiar theorems of calculus and geometry find beautiful generalizations. For instance, Stokes' theorem, which tells us that the integral of a derivative over a region is equal to the value of the function on the boundary, can be extended to supermanifolds. The unity of mathematics shines through: this fundamental principle of calculus persists even in a world woven with anticommuting threads.

Perhaps the most stunning application, the crown jewel of this entire enterprise, lies at the intersection of geometry, topology, and physics: the Atiyah-Singer Index Theorem. This theorem, one of the greatest intellectual achievements of the 20th century, establishes a deep link between two vastly different worlds. On one side, we have analysis: the index of a differential operator (like the Dirac operator), which counts the difference between the number of its zero-energy solutions of different types. This is a subtle property depending on the local details of the operator. On the other side, we have topology and geometry: quantities that describe the global "shape" and "curvature" of the space on which the operator is defined.

The theorem states that these two numbers, one from analysis and one from geometry, are miraculously equal. A physicist, using the path integral, can offer a wonderfully intuitive "proof" of this theorem. By representing the index as a supersymmetric path integral, one can show that the result localizes to an integral over the constant field configurations. This calculation, involving a Berezin integral over fermionic zero modes, reveals that the index is precisely equal to the total integrated curvature—like the total magnetic flux—over the entire manifold. The quantum fluctuations conspire in such a way that only the global, topological information survives.

From a trick for calculating determinants to a tool for proving one of the deepest theorems in modern mathematics, Berezin integration has taken us on a remarkable intellectual expedition. It shows us that an idea that at first seems strange and abstract can turn out to be the perfect language to describe the physical world, revealing the hidden unity and profound beauty that underlie the structure of our universe.