Popular Science

Computational Fluid Dynamics (CFD): Principles, Methods, and Applications

Key Takeaways
  • Computational Fluid Dynamics (CFD) translates the continuous Navier-Stokes equations into discrete problems that computers can solve, often using the robust, conservation-based Finite Volume Method.
  • Successfully simulating fluid flow requires overcoming significant numerical challenges like pressure-velocity decoupling, stiffness, and non-physical oscillations near shocks by employing clever techniques such as projection methods, implicit time-stepping, and flux limiters.
  • Advanced techniques like the adjoint method transform CFD from a pure analysis tool into a powerful partner for automated design optimization in engineering.
  • The fundamental mathematical tools developed for CFD, such as enforcing physical constraints with elliptic equations, are so universal that they appear in fields as diverse as solid mechanics and the simulation of black holes in numerical relativity.

Introduction

The magnificent Navier-Stokes equations describe everything from the swirl of milk in coffee to the silent glide of a jet through the stratosphere. In principle, they hold the key to all fluid motion, but they possess a ferocious complexity that makes them impossible to solve by hand for almost any practical scenario. This gap between physical law and analytical solution is where the true adventure of computational science begins.

Computational Fluid Dynamics (CFD) is the art of translating these intractable equations into a language a computer can solve. This process is a subtle dialogue between physics, mathematics, and computer science, resulting in clever methods that respect the physical principles they seek to model. This article explores the core of these methods.

First, we will delve into the foundational ​​Principles and Mechanisms​​ of CFD. You will learn how continuous physical laws are discretized into solvable problems, how complex geometries are tamed, and what ingenious strategies are used to overcome numerical instabilities. Then, in the section on ​​Applications and Interdisciplinary Connections​​, we will see how these methods are not just analytical tools but creative partners in engineering design and how their core ideas echo in fields as far-flung as astrophysics and the simulation of spacetime itself.

Principles and Mechanisms

To simulate the majestic dance of a swirling galaxy, the chaotic tumble of a waterfall, or the silent passage of air over a wing, we must first learn the language of fluids. This language is mathematics, and its grammar is found in a set of profound physical principles: the conservation of mass, momentum, and energy. Computational Fluid Dynamics, or CFD, is the art of teaching a computer to speak this language. It's not about finding a single, elegant formula that describes everything, but about developing a robust strategy to painstakingly reconstruct the behavior of a fluid, piece by piece, moment by moment.

The Laws of Motion in a Fluid World

At the heart of fluid dynamics lie the celebrated ​​Navier-Stokes equations​​. They are the grand synthesis of Isaac Newton's laws of motion, adapted for a substance that flows and deforms. Think of a tiny parcel of fluid. Its motion is determined by a tug-of-war between forces: the push of pressure from its neighbors, the sticky drag of viscosity, and the pull of gravity. The Navier-Stokes equations write this drama down.

But there’s a twist, a term that gives fluids their captivating complexity and makes them notoriously difficult to predict. It's the convective acceleration term, which often looks something like $(\vec{V} \cdot \nabla)\vec{V}$. This isn't an external force. Instead, it describes how the velocity of our fluid parcel changes simply because it has moved to a new spot in the flow where the velocity is different. Imagine you're in a raft on a river. Even if the river's flow isn't changing in time, you accelerate as you are swept from a slow, wide part into a fast, narrow channel. That's convective acceleration. This term is non-linear, meaning it involves products of the velocity with itself. This non-linearity is the seed from which the intricate, multi-scale structures of turbulence grow, creating a cascade of energy from large eddies down to tiny swirls that is immensely expensive to capture computationally.

Remarkably, this term also dictates the very character of the governing equations. For flows slower than the speed of sound (​​subsonic​​), the equations are ​​elliptic​​. This means that a disturbance, like a pressure pulse, will spread out in all directions, much like the ripples from a pebble dropped in a still pond. The flow at any one point depends on the conditions everywhere else. But for flows faster than sound (​​supersonic​​), the equations become ​​hyperbolic​​. Now, disturbances are swept downstream faster than they can propagate upstream. Information is confined to a "cone of influence" behind the object. You don't hear a supersonic jet until after it has passed you; the sound is trapped in this cone. This change in mathematical character, from elliptic to hyperbolic, is a beautiful reflection of the underlying physics and requires fundamentally different numerical strategies to handle. In the supersonic regime, this can lead to the formation of abrupt changes in the flow properties, known as ​​shock waves​​, which demand specialized numerical techniques to be simulated accurately and stably.

A World of Finite Volumes

A computer cannot grasp the infinite continuity of a real fluid. It thinks in discrete numbers. The first step in CFD is therefore ​​discretization​​: we chop our continuous domain of space and time into a finite number of small pieces. One of the most intuitive and powerful ways to do this is the ​​Finite Volume Method (FVM)​​.

The philosophy of the FVM is to stop insisting that the conservation laws hold at every single infinitesimal point, which is an impossible demand for a computer. Instead, we enforce them on average over small, finite volumes or "cells". We draw a grid of these cells covering our entire domain. For each cell, we write a budget:

The rate of change of a quantity (like mass or momentum) inside the cell = The net amount of that quantity flowing in or out across the cell's faces + The amount of that quantity being created or destroyed by sources inside the cell.

This statement is the soul of the FVM. To make it work, we must write our governing equations in a special "conservation form". For example, the one-dimensional momentum equation can be arranged to look like $\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} = S$, where $Q$ is the momentum per unit volume ($\rho u$) and $F$ is the momentum flux. This flux term, $F = \rho u^2 + p - \tau_{xx}$, represents all the ways momentum can be transported across a boundary: by the fluid carrying its own momentum across ($\rho u^2$), by pressure pushing on the boundary ($p$), and by viscous stresses pulling on it ($-\tau_{xx}$). By focusing on the fluxes at the boundaries between cells, the FVM ensures that what flows out of one cell flows perfectly into its neighbor. Summed over the whole domain, all these internal fluxes cancel out, and we are left with a method that respects the global conservation laws to machine precision—a critical feature for physical realism.
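The telescoping of internal fluxes is easy to verify in a few lines. The sketch below (a minimal illustration, not drawn from any particular solver) advances the 1D linear advection equation with a first-order upwind flux on a periodic grid; because every interface flux is added to one cell and subtracted from its neighbor, the domain total stays fixed to machine precision.

```python
import numpy as np

def fvm_step(q, a, dx, dt):
    # Upwind interface flux for a > 0: the flux leaving cell i to the right is a*q_i.
    F = a * q
    # Budget per cell: inflow through the left face minus outflow through the right face.
    return q + dt / dx * (np.roll(F, 1) - F)

a, dx, dt = 1.0, 0.1, 0.05            # advection speed, cell size, time step (CFL = 0.5)
x = np.arange(40) * dx
q = np.exp(-((x - 2.0) ** 2))         # smooth initial bump of cell averages
total0 = q.sum() * dx
for _ in range(100):
    q = fvm_step(q, a, dx, dt)
# Internal fluxes telescope, so the global total is conserved.
print(abs(q.sum() * dx - total0) < 1e-12)  # → True
```

The same cancellation holds on any mesh and for any flux function, which is exactly why the method is phrased in terms of interface fluxes rather than pointwise derivatives.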

Taming Complex Shapes

It's all well and good to imagine a grid of neat, rectangular cells. But what about the flow around a car, through a human artery, or over the intricate blades of a turbine? The geometry is complex. The solution is elegant: we perform a mathematical change of coordinates. We create a smooth mapping from our twisted, complicated physical domain to a simple, pristine computational domain, which is typically just a cube made of perfectly uniform grid cells.

We do all our numerical work in this simple computational space. However, the transformation leaves its mark on the equations. The chain rule of calculus tells us that derivatives in the physical space become more complex combinations of derivatives in the computational space, involving geometric factors called ​​metric terms​​. To maintain the beautiful conservation property of the FVM, we can't just transform the velocity components; we must transform the fluxes. The mathematically "correct" way to do this involves a concept from tensor calculus: the ​​contravariant flux​​. By formulating our fluxes in this way, we ensure that our discrete divergence in the simple computational space correctly represents the physical divergence in the complex physical space. This preserves the perfect cancellation of fluxes at cell interfaces, giving us a conservative scheme that also correctly handles a uniform flow without generating artificial forces—a property known as being ​​free-stream preserving​​.

Conversations Between Cells

The core of an FVM calculation is computing the flux across each face. Let's return to our budget analogy. To know how much momentum flows between cell $i$ and cell $j$, we need to know the velocity, pressure, and density right at the face they share. The problem is, we only store these values at the center of each cell. We must therefore interpolate the cell-center values to the face.

This seemingly simple step is fraught with subtlety. A naive interpolation can ruin the accuracy of the entire simulation. For instance, consider calculating the heat flux through a face, which depends on the temperature gradient at that face. We might estimate this gradient from the temperatures in the two adjacent cells, $T_P$ and $T_N$. But if the thermal conductivity $k$ itself depends on temperature, we also need to know the temperature at the face to evaluate it, requiring an interpolation for that as well.

On the non-uniform grids used for complex geometries, the challenge is even greater. A simple linear average of the neighboring cell values might not be accurate. A more rigorous requirement is that the interpolation scheme should be linearity-preserving. This means that if the true physical field happens to be a simple linear function (e.g., $u(x) = \alpha x + \beta$), our interpolation must reproduce its exact value at the face. Satisfying this condition leads to more sophisticated interpolation weights that depend on the physical distances between the cell centers and the face, ensuring the scheme remains accurate even when the grid is stretched and distorted.
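A one-dimensional sketch makes the condition concrete. In the snippet below (illustrative positions and coefficients of our own choosing, not from any particular code), distance-based weights reproduce an arbitrary linear field exactly at a face that does not sit midway between the two cell centers; a naive 50/50 average would not.

```python
import numpy as np

# Cell centers P and N on a stretched grid, with the shared face between them.
x_P, x_N = 0.0, 0.3
x_f = 0.1                                # face is NOT at the midpoint (0.15)
alpha, beta = 2.5, -1.0                  # an arbitrary linear field u(x) = alpha*x + beta

u_P, u_N = alpha * x_P + beta, alpha * x_N + beta
# Linearity-preserving weight: each cell's weight is proportional to the
# OTHER cell's distance from the face.
w = (x_N - x_f) / (x_N - x_P)
u_face = w * u_P + (1 - w) * u_N
print(np.isclose(u_face, alpha * x_f + beta))   # → True: exact at the face
print(np.isclose(0.5 * (u_P + u_N), alpha * x_f + beta))  # → False: naive average misses
```

The naive average actually gives the field's value at the midpoint of the two centers, which is the wrong location whenever the grid is stretched.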

The Ghost in the Machine: Tackling Numerical Challenges

Building a reliable CFD solver is not just about discretizing the equations. It's about anticipating and outsmarting the various ways a numerical scheme can fail or produce unphysical results. These challenges have led to some of the most clever ideas in the field.

The Pressure Puzzle

For an incompressible fluid like water, there is no direct equation for pressure. Instead, we have a rigid constraint: the divergence of the velocity must be zero, $\nabla \cdot \mathbf{u} = 0$. This means that the net volume flow out of any region must be zero. Pressure, it turns out, is the enforcer. It adjusts itself instantaneously throughout the fluid to ensure this constraint is always met.

So how do we find the pressure? Projection methods, used in algorithms like SIMPLE and PISO, provide a brilliant strategy. It’s a two-step process. First, we solve the momentum equations to find a preliminary velocity, $\mathbf{u}^*$, ignoring the pressure constraint. This velocity field will not be divergence-free. The amount by which it fails, $\nabla \cdot \mathbf{u}^*$, is a measure of the local sources and sinks of volume that shouldn't be there. In the second step, we calculate a pressure-like field, $\phi$, by solving a Pressure Poisson Equation (PPE), which looks like $\nabla^2 \phi = \frac{1}{\Delta t} \nabla \cdot \mathbf{u}^*$. The gradient of this pressure field is precisely what's needed to "project" our preliminary velocity onto a divergence-free state: $\mathbf{u}^{n+1} = \mathbf{u}^* - \Delta t \nabla \phi$. In essence, we let the velocity field first violate the law, then we compute the exact pressure "punishment" required to force it back into compliance. In practice, solving the full PPE can be expensive, so many algorithms use an approximate form and iterate a few times to drive the divergence towards zero.
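The two-step logic can be demonstrated directly. The sketch below is an idealized version on a doubly periodic domain: the fields are represented by their Fourier coefficients, the PPE is solved exactly in spectral space (production codes use iterative solvers on a grid), and $\Delta t$ is absorbed into $\phi$ for brevity. Starting from a random, divergence-laden $\mathbf{u}^*$, one projection step wipes the divergence out.

```python
import numpy as np

n, L = 32, 2 * np.pi
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers of the periodic box
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                   # avoid dividing by zero on the mean mode

rng = np.random.default_rng(0)
# A random preliminary velocity u* (spectral coefficients), far from divergence-free.
u_hat = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v_hat = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

div_star = 1j * kx * u_hat + 1j * ky * v_hat     # divergence of u*
phi_hat = -div_star / k2                         # PPE: -k^2 phi = div(u*)
phi_hat[0, 0] = 0.0                              # pressure is defined up to a constant
u_new = u_hat - 1j * kx * phi_hat                # u <- u* - grad(phi)
v_new = v_hat - 1j * ky * phi_hat

div_new = 1j * kx * u_new + 1j * ky * v_new
print(np.max(np.abs(div_new)) < 1e-10)           # → True: now divergence-free
```

One solve of a single elliptic equation repairs the entire field at once, which is why the pressure solve dominates the cost of most incompressible codes.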

Escaping the Checkerboard

Sometimes, a numerical scheme can have a blind spot. Consider a grid where we store pressure and velocity at the same location (a ​​collocated grid​​). If a bizarre, non-physical pressure field arises that looks like a checkerboard—high, low, high, low—our standard way of calculating the pressure gradient might not see it! When we compute the gradient at a cell center, we might look at its neighbors, see one high and one low, and average them out to a gradient of zero. The pressure field exerts no force, the divergence constraint is not enforced properly, and the velocity solution becomes polluted with wild oscillations. This phenomenon, known as ​​pressure-velocity decoupling​​, is a classic pitfall. It revealed that the specific choice of discretization and grid arrangement is critical. It motivated the invention of ​​staggered grids​​, where velocity components are stored on the faces and pressure at the centers, which elegantly sidesteps this problem, or the development of more sophisticated interpolation schemes for collocated grids.
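The blind spot is easy to reproduce in one dimension. In this sketch (unit grid spacing, periodic ends), a perfect checkerboard pressure produces an identically zero central-difference gradient at the cell centers, while a face-based (staggered) difference sees it immediately.

```python
import numpy as np

# A checkerboard pressure field on a periodic 1D grid with dx = 1.
p = np.array([1.0, -1.0] * 8)

# Collocated central difference: (p[i+1] - p[i-1]) / 2 skips over the oscillation.
grad_central = (np.roll(p, -1) - np.roll(p, 1)) / 2.0
print(np.allclose(grad_central, 0.0))   # → True: wild pressure, zero "force"

# Staggered/face difference: p[i+1] - p[i] lives at the face i+1/2 and sees it.
grad_face = np.roll(p, -1) - p
print(np.allclose(grad_face, 0.0))      # → False: the checkerboard is detected
```

Because the collocated gradient is blind to this mode, nothing in the momentum equation pushes back against it, which is exactly how the decoupling pollutes a simulation.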

The Physics of Positivity

In the world of high-speed, compressible gas dynamics, the physics demands that density $\rho$ and pressure $p$ must always be positive. A negative density is meaningless, and a negative absolute pressure is impossible. While our cell-averaged values might be positive, an aggressive high-order interpolation scheme, in its quest for accuracy, can overshoot and produce negative values at the interface between cells.

The consequences are not just inaccurate; they are catastrophic. The speed of sound, $c$, is given by $c^2 = \gamma p / \rho$. If pressure $p$ becomes negative, $c^2$ becomes negative, and the sound speed becomes an imaginary number. This is a mathematical siren warning that the physics has broken down. The governing equations lose their hyperbolic character, meaning the tidy structure of information flow is destroyed. A numerical method that relies on wave speeds to compute fluxes, like a Riemann solver, will be fed an imaginary wave speed. The code will likely crash, producing a NaN (Not-a-Number) as it tries to take the square root of a negative number. This dramatic failure is a powerful reminder that numerical algorithms must be designed with physical constraints baked into their DNA.
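The failure mode takes two lines to demonstrate. Here a slightly negative pressure, of the kind an overshooting interpolation might hand to a flux routine, turns the sound speed into a NaN, and the NaN then contaminates every quantity computed from it.

```python
import numpy as np

gamma = 1.4
rho, p = 1.0, -0.01                      # a non-physical state from an interpolation overshoot
with np.errstate(invalid="ignore"):      # silence the warning; we want to see the NaN itself
    c = np.sqrt(gamma * p / rho)         # square root of a negative number
print(np.isnan(c))          # → True: the wave speed is no longer a number
print(np.isnan(c + 1e9))    # → True: NaN propagates through every subsequent flux
```

This is why modern schemes add positivity-preserving limiters that clip the interpolation before it can produce such states.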

The Dance of Time and Stability

For unsteady flows, we must march forward in time. After discretizing in space, we are left with a massive system of ordinary differential equations (ODEs). A major challenge is ​​stiffness​​. A stiff system is one where different things are happening on vastly different timescales. For instance, pressure waves might zip across a grid cell in a nanosecond, while the slow, viscous diffusion of a dye might take seconds.

A simple, explicit time-stepping method (like "take the current state, calculate the rate of change, and take a small step forward") is ruled by the fastest process. To remain stable, its time step, $\Delta t$, must be smaller than the time it takes for the fastest signal to cross a grid cell. For a stiff problem, this can lead to an absurdly large number of tiny time steps, making the simulation prohibitively expensive.

To overcome this, we use ​​implicit methods​​. These methods calculate the state at the next time step based on the rates of change at that same future time. This requires solving a large system of coupled equations at each step, but it buys us incredible stability. Methods that are ​​A-stable​​ can take large time steps for stiff problems without the solution blowing up. Even better are ​​L-stable​​ methods. Not only are they stable, but they also strongly damp the very-high-frequency components of the solution—the ones corresponding to those lightning-fast but often uninteresting physical processes. This allows us to take large time steps and focus our computational effort on the evolution of the slower, more interesting features of the flow.
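The stability gap shows up immediately on the classic stiff model problem $y' = -\lambda y$. Below, $\lambda \Delta t = 10$, far outside the explicit stability limit ($\lambda \Delta t < 2$): explicit Euler explodes, while implicit (backward) Euler, which is L-stable, damps the fast mode just as the physics does.

```python
# Stiff test problem dy/dt = -lam * y, exact solution decays to zero.
lam, dt, nsteps = 1000.0, 0.01, 50       # lam*dt = 10: a huge step for this mode
y_exp = y_imp = 1.0
for _ in range(nsteps):
    y_exp = y_exp + dt * (-lam * y_exp)  # explicit:  y_{n+1} = (1 - lam*dt) * y_n
    y_imp = y_imp / (1.0 + lam * dt)     # implicit:  y_{n+1} = y_n / (1 + lam*dt)

print(abs(y_exp) > 1e10)    # → True: the explicit solution has blown up
print(abs(y_imp) < 1e-10)   # → True: the implicit solution decays, like the physics
```

The implicit amplification factor $1/(1+\lambda\Delta t)$ goes to zero as $\lambda\Delta t \to \infty$, which is precisely the L-stable damping of very fast modes described above.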

From the fundamental laws of physics to the practical art of taming numerical instabilities, CFD is a field built upon layers of deep principles and ingenious mechanisms. It is a testament to our ability to translate the continuous world of nature into the discrete language of the machine, allowing us to explore the unseen dynamics that shape our universe.

The Art of the Possible: Forging Reality from Equations

We have spent some time with the magnificent equations of fluid motion, the Navier-Stokes equations. In principle, they tell us everything there is to know about how fluids move, from the gentlest breeze to a raging storm. But there is a catch, a rather significant one: for nearly any situation of practical interest, these equations are simply too ferocious to be solved by hand. They contain a mischievous nonlinear term that couples all scales of motion, from the vast sweep of a hurricane down to the tiniest eddy, creating a tapestry of complexity that defies direct analytical solution.

This is where the true adventure begins. Computational Fluid Dynamics, or CFD, is the art and science of translating these intractable equations into a language a computer can understand and solve. But do not be mistaken; this is no mere act of blind translation. It is a deep and subtle dialogue between physics, mathematics, and computer science. A well-crafted CFD method is not just a crude approximation; it is a thing of beauty in itself, a clever construct designed to respect and even embody the physical principles it seeks to model.

In this chapter, we will journey beyond the principles and into the world of applications. We will see how the clever ideas we have discussed empower us not only to analyze the world but to actively design and shape it. We will see how the challenges of modeling fluids have forged a set of intellectual tools so powerful and so universal that they echo in fields as far-flung as astrophysics and the simulation of spacetime itself.

Engineering the Modern World: From Flight to Data

Let's begin with a deceptively simple question: how do we simulate the flow of air over an airplane wing? The wing is a complex, three-dimensional shape, with curves and corners. Our numerical methods, whether they are based on finite volumes, finite elements, or other schemes, must first chop up the space around the wing into a "mesh" of small cells. Here we hit our first practical problem. In an ideal world, this mesh would be a beautiful, orderly grid of perfect cubes. In reality, to conform to the wing's shape, many of these cells will be warped, skewed, and stretched.

Does this matter? Absolutely! A core operation in CFD is calculating gradients—how much the pressure or velocity changes from one point to another. On a skewed mesh, simple methods for calculating these gradients can become surprisingly inaccurate, poisoning the entire solution. This is a microcosm of the entire engineering challenge of CFD: the real world is messy. The genius of modern CFD is the development of robust schemes, like the least-squares gradient reconstruction, which are clever enough to maintain high accuracy even on the imperfect meshes that complex geometry forces upon us.
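A least-squares reconstruction can be sketched in a few lines. In the snippet below (an illustrative 2D cell with deliberately skewed neighbor positions of our own choosing), the gradient is recovered by solving a small least-squares system over the displacement vectors, and it reproduces a linear field exactly despite the skewness.

```python
import numpy as np

xc = np.array([0.0, 0.0])                 # cell center where we want the gradient
# Neighbor cell centers in a skewed, irregular arrangement.
nbrs = np.array([[0.9, 0.1], [-0.5, 0.7], [0.2, -1.1], [-0.8, -0.3]])

grad_true = np.array([3.0, -2.0])
f = lambda x: x @ grad_true + 1.0          # a linear test field

# Least-squares fit: find g minimizing || d @ g - du ||, where each row of d
# is a displacement to a neighbor and du the corresponding value difference.
d = nbrs - xc
du = np.array([f(p) - f(xc) for p in nbrs])
g, *_ = np.linalg.lstsq(d, du, rcond=None)
print(np.allclose(g, grad_true))           # → True: exact for linear fields, any skewness
```

Because the fit uses the actual geometry of the neighbor cloud, it keeps this exactness on the warped cells that a body-fitted mesh inevitably contains.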

Of course, some methods are fundamentally ill-suited for complex shapes. Spectral methods, which represent the flow as a sum of smooth, global functions (like sines and cosines), can achieve astonishing accuracy for flows in simple domains like boxes or pipes. But try to use a single set of smooth functions to describe the flow around an object with a sharp corner, and you run into a disaster known as the Gibbs phenomenon. The smooth functions struggle violently to capture the sharp change, producing wild oscillations and errors. This reveals a fundamental trade-off in CFD: the tension between geometric flexibility and raw numerical power. Much of the field's progress has come from inventing methods that give us the best of both worlds.
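The Gibbs phenomenon is worth seeing once in numbers. A truncated Fourier series of a unit square wave overshoots the jump by roughly 9% of its height, and adding more terms narrows the overshoot but never removes it; the partial sum below peaks near 1.18 instead of 1.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
partial = np.zeros_like(x)
for n in range(1, 200, 2):                 # odd harmonics of a +/-1 square wave
    partial += (4 / np.pi) * np.sin(n * x) / n

print(partial.max())                       # ≈ 1.18: persistent overshoot past 1
```

This is the "violent struggle" in miniature: smooth global basis functions simply cannot represent a discontinuity without ringing, which is why spectral methods shine in smooth, simple domains and falter at sharp corners and shocks.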

But what if we want to do more than just analyze a given wing? What if we want the computer to design a better one for us? Suppose we want to find the exact shape that minimizes drag. This is a problem of optimization. The "design space"—the collection of all possible wing shapes—is enormous. If we have, say, a thousand variables defining the shape, we would naively have to run a thousand CFD simulations just to figure out how to nudge the shape in the right direction for the next design iteration. This would be computationally impossible.

This is where one of the most elegant ideas in modern computational science comes to the rescue: the adjoint method. The adjoint method is a mathematical masterstroke that allows us to calculate the gradient of our objective function (like drag) with respect to all one thousand design variables at once, for the cost of just one extra CFD simulation. It is not an exaggeration to call this a kind of magic. It makes large-scale optimization feasible, transforming CFD from a mere analysis tool into a powerful creative partner in the engineering design process.
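The bookkeeping behind the adjoint trick fits in a toy example. Below (a hypothetical linear model of our own construction, standing in for a CFD solver) the state obeys $A(\theta)u = b$ with $A(\theta) = A_0 + \sum_k \theta_k E_k$, and the objective is $J = c \cdot u$. One extra solve with $A^\top$ yields the gradient with respect to all design variables at once, matching finite differences that need one extra solve per variable.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3                                   # state size, number of design variables
A0 = rng.standard_normal((n, n)) + 8 * np.eye(n)   # well-conditioned base operator
E = 0.3 * rng.standard_normal((m, n, n))      # dA/dtheta_k for each design variable
b, c = rng.standard_normal(n), rng.standard_normal(n)
theta = rng.standard_normal(m)

A = lambda th: A0 + np.einsum("k,kij->ij", th, E)

def solve_J(th):
    u = np.linalg.solve(A(th), b)             # "CFD solve" for this design
    return c @ u, u

J, u = solve_J(theta)
# ONE adjoint solve gives every sensitivity: A^T lam = c, dJ/dtheta_k = -lam.(E_k u).
lam = np.linalg.solve(A(theta).T, c)
grad_adj = np.array([-lam @ (E[k] @ u) for k in range(m)])

# Finite-difference check: m extra solves, one per design variable.
h = 1e-6
grad_fd = np.array([(solve_J(theta + h * np.eye(m)[k])[0] - J) / h for k in range(m)])
print(np.allclose(grad_adj, grad_fd, rtol=1e-3, atol=1e-5))   # → True
```

With a thousand design variables the finite-difference column would mean a thousand flow solves; the adjoint column stays at one, which is the whole magic.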

Finally, we must remember that every large CFD simulation is a veritable firehose of data. We might simulate the flow around a cylinder for millions of tiny time steps, but we cannot possibly save the results of every single one. So, a new question arises: how often do we need to save the data to capture the physics we care about? Consider the beautiful phenomenon of vortex shedding, where a cylinder in a flow sheds swirling vortices in a regular, periodic pattern. This shedding happens at a specific frequency, which we can predict using a dimensionless quantity called the Strouhal number. If we save our data at a sampling frequency that is too low—lower than twice the shedding frequency—we will fall prey to a phenomenon from signal processing called aliasing. Our data will show a phantom, low-frequency oscillation that isn't really there, completely misrepresenting the physics. This is a beautiful intersection of fluid dynamics and information theory, reminding us that a computational scientist must be as adept at data analysis as they are at solving differential equations.
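Aliasing is simple enough to exhibit exactly. In the sketch below (illustrative frequencies of our own choosing), a 0.9 Hz "shedding" signal sampled at only 1 Hz, below its Nyquist rate of 1.8 Hz, produces sample values that are indistinguishable from a phantom 0.1 Hz oscillation.

```python
import numpy as np

f_shed, f_sample = 0.9, 1.0                  # signal frequency vs sampling frequency (Hz)
t = np.arange(0, 200, 1 / f_sample)          # the instants at which we save data
samples = np.sin(2 * np.pi * f_shed * t)
# The alias folds to |f_sample - f_shed| = 0.1 Hz.
phantom = np.sin(2 * np.pi * (f_sample - f_shed) * t)

# At the sample instants the two are identical (up to sign): the data cannot
# tell a 0.9 Hz shedding cycle from a slow 0.1 Hz drift.
print(np.allclose(samples, -phantom))        # → True
```

The cure is the Nyquist criterion: save the flow field at more than twice the highest frequency you care about, here the shedding frequency predicted by the Strouhal number.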

Taming the Chaos: Simulating Shocks and Turbulence

Fluid dynamics is famous for its difficult children: shock waves and turbulence. These phenomena represent the Navier-Stokes equations at their most formidable, and they demand the utmost ingenuity from our numerical methods.

A shock wave, like the one that forms in front of a supersonic jet, is a region where fluid properties like pressure and density change almost instantaneously across an incredibly thin layer. As we saw with the Gibbs phenomenon, this is a nightmare scenario for high-order numerical methods, which are designed for smooth solutions. They tend to produce large, unphysical oscillations around the shock that can corrupt the entire simulation. A simpler, low-order method might not oscillate, but it would smear the shock out over many grid cells, losing all the fine detail.

So, what do we do? We get clever. Modern shock-capturing schemes employ a hybrid strategy that is, in a sense, a form of artificial intelligence. The algorithm uses a "troubled-cell indicator" to scan the flow field and "detect" where a shock is likely to be. In the vast regions of smooth flow, it uses a highly accurate, high-order method. But in any cell flagged as "troubled," it preemptively switches to a robust, low-order method (a "limiter") that can handle the shock without oscillating. The true cleverness lies in the design of the indicator. A simple indicator might get confused between a shock wave and a swirling vortex. Both involve strong gradients. But a sophisticated indicator knows its physics: a shock is characterized by strong compression (a large negative divergence of velocity, $\nabla \cdot \mathbf{u} < 0$), while a vortex is characterized by strong rotation (a large curl of velocity, $\nabla \times \mathbf{u}$) but very little compression. By designing a sensor that can tell the difference, the algorithm can selectively and intelligently apply its brute-force stabilization only where it is truly needed.
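A sensor of this divergence-versus-curl type (similar in spirit to the well-known Ducros sensor, $s = (\nabla\cdot\mathbf{u})^2 / [(\nabla\cdot\mathbf{u})^2 + |\nabla\times\mathbf{u}|^2 + \epsilon]$) can be sketched directly. On two synthetic fields of our own construction, pure compression and pure solid-body rotation, it fires on one and stays quiet on the other, even though both have strong gradients.

```python
import numpy as np

def sensor(u, v, dx):
    # Ducros-type indicator: near 1 where compression dominates, near 0 in rotation.
    dudx = np.gradient(u, dx, axis=0); dudy = np.gradient(u, dx, axis=1)
    dvdx = np.gradient(v, dx, axis=0); dvdy = np.gradient(v, dx, axis=1)
    div = dudx + dvdy
    curl = dvdx - dudy
    return div**2 / (div**2 + curl**2 + 1e-30)

n = 32; dx = 1.0 / n
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")

s_comp = sensor(-x, -y, dx)   # uniform compression: div = -2, curl = 0
s_vort = sensor(-y, x, dx)    # solid-body rotation: div = 0, curl = 2

print(s_comp.mean() > 0.99, s_vort.mean() < 0.01)   # → True True
```

In a real scheme this scalar gates the limiter: cells where the sensor is near 1 get the robust low-order treatment, and the rest keep the full high-order accuracy.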

Then there is turbulence, famously described as the last great unsolved problem of classical physics. Direct simulation of every eddy in a turbulent flow is beyond the reach of even the most powerful supercomputers. A key challenge is that the nonlinear term in the Navier-Stokes equations, $\mathbf{u} \cdot \nabla \mathbf{u}$, can cause a cascade of energy from large scales to small scales. In a numerical simulation, if this energy cascades down to a scale smaller than our grid can resolve, it can "alias" and fold back into the resolved scales as spurious, unphysical energy, eventually causing the simulation to become unstable and blow up.

To combat this, mathematicians have devised schemes that are "discretely energy-conserving." They rewrite the nonlinear term in a special "split form" that, when discretized, exactly mimics the energy conservation properties of the original continuous equations. This ensures that the numerical scheme, by its very structure, cannot create energy out of thin air. This is a profound idea: we are not just approximating the equations; we are building the fundamental conservation laws of physics directly into the DNA of our algorithm.
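The structural claim can be checked semi-discretely in one dimension. For Burgers' nonlinearity on a periodic grid with central differences, the split form $-\tfrac{1}{3}[\partial_x(u^2) + u\,\partial_x u]$ is skew-symmetric, so the discrete energy production $u \cdot \text{rhs}$ vanishes identically, for any field, while the plain divergence form alone does not share this guarantee.

```python
import numpy as np

def ddx(f, dx):
    # Periodic central difference: a skew-symmetric operator.
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

n = 64; dx = 2 * np.pi / n
u = np.random.default_rng(2).standard_normal(n)   # an arbitrary (rough!) field

rhs_div = -ddx(u**2 / 2, dx)                      # divergence form of -u u_x
rhs_split = -(ddx(u**2, dx) + u * ddx(u, dx)) / 3 # split (skew-symmetric) form

print(abs(u @ rhs_split) < 1e-10)   # → True: zero discrete energy production, by construction
print(abs(u @ rhs_div) > 1e-6)      # → True: the divergence form can inject energy
```

The cancellation in the split form follows from $D^\top = -D$ for the periodic central-difference matrix, so conservation is a property of the algebra of the scheme, not of the resolution.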

The Universal Toolkit: Echoes of CFD Across Science

The powerful ideas forged in the crucible of fluid dynamics are not confined to that field alone. They are part of a universal toolkit for computational science, and their echoes can be found in the most surprising of places.

Consider the field of computational solid mechanics, which simulates the deformation and stress in materials. If we are modeling a nonlinear "hyperelastic" material, the energy stored in the material might depend on the cube of the strain, $\varepsilon^3$. When we calculate the total energy in a finite element, we must integrate this term. But if our strain field $\varepsilon$ is represented by a polynomial of degree $p$, the term we must integrate is a polynomial of degree $3p$. If our numerical integration rule (the quadrature) is not accurate enough for this much higher degree, we will suffer from aliasing errors, just as in the turbulence problem. The solution is the same in principle: "overintegration," or using a more accurate integration rule than would be necessary for a simple linear problem. This shows how scientists in different disciplines, grappling with nonlinearity in different physical contexts, independently discovered the same fundamental numerical principle.
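The counting rule is concrete: an $n$-point Gauss rule integrates polynomials up to degree $2n-1$ exactly. With a quadratic strain (degree $p = 2$, coefficients below chosen arbitrarily for illustration), the cubic energy density has degree $3p = 6$, so 2 points (exact only to degree 3) alias while 4 points (exact to degree 7) do not.

```python
import numpy as np

strain = np.polynomial.Polynomial([0.1, 0.7, -0.4])   # degree p = 2 strain field
energy = strain ** 3                                   # energy density, degree 3p = 6

# Exact integral over the reference element [-1, 1] via the antiderivative.
F = energy.integ()
exact = F(1) - F(-1)

def gauss(npts):
    xq, wq = np.polynomial.legendre.leggauss(npts)     # n-point rule: exact to degree 2n-1
    return np.sum(wq * energy(xq))

print(abs(gauss(4) - exact) < 1e-12)   # → True: overintegration recovers the exact value
print(abs(gauss(2) - exact) > 1e-3)    # → True: the "linear-problem" rule aliases badly
```

The 2-point rule would be perfectly adequate for a linear material; it is the nonlinearity that raises the polynomial degree and forces the extra quadrature points.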

This universality extends to the very act of computation itself. A large-scale CFD simulation might run on a supercomputer with tens of thousands of processors. Each processor handles a small chunk of the domain, and at every time step, it must communicate with its neighbors to exchange information about the flow at their boundaries (the "halo" data). This communication takes time, and while a processor is waiting for data, it is sitting idle. The key to high-performance computing is to overlap this communication with useful computation. A processor can start working on the "interior" cells of its domain, which don't depend on the halo data, while the communication happens in the background. The challenge is to find the optimal balance—to pipeline the work and communication in a way that minimizes idle time without introducing too much software overhead or violating numerical dependencies. This problem of optimizing the "compute-communicate" cycle is not unique to CFD; it is a central theme in all large-scale parallel computing, from climate modeling to astrophysics.

Perhaps the most breathtaking connection of all is between the humble world of incompressible fluid flow and the mind-bending realm of numerical relativity—the simulation of colliding black holes and gravitational waves using Einstein's theory of general relativity. In incompressible CFD, a key constraint is that the velocity field must be divergence-free: $\nabla \cdot \mathbf{u} = 0$. This constraint isn't a dynamic evolution equation; it's a condition that must be satisfied at every instant. Numerically, this is often enforced by a "pressure projection" method. We find a pressure field $p$ by solving a global, elliptic Poisson equation ($\nabla^2 p = \dots$) that ensures the resulting velocity field is divergence-free. The pressure acts as a kind of Lagrange multiplier to enforce the constraint.

Now, let's turn to general relativity. When simulating the evolution of spacetime, one must choose a coordinate system, a process called "gauge choice" or "slicing." A poor choice can lead to singularities or instabilities that crash the simulation. One very successful and stable choice is "maximal slicing," which enforces the constraint that the trace of the extrinsic curvature, $K$, is zero on each slice of time. This quantity $K$ measures the local expansion or contraction of space, making it a geometric analogue of the velocity divergence $\nabla \cdot \mathbf{u}$. And how is this condition $K = 0$ enforced? By solving a global, elliptic equation for a variable called the "lapse function" $\alpha$, which controls the flow of time from one slice to the next.

The analogy is staggering. The pressure that enforces incompressibility in a water pipe and the lapse function that provides a stable slicing of spacetime near a black hole are governed by the same type of mathematical structure: a global elliptic equation that acts to enforce a constraint. The intellectual toolkit developed for simulating earthly flows contains the very same ideas needed to simulate the fabric of the cosmos.

It is a beautiful testament to the power and unity of mathematical physics. The journey of understanding, which may begin with a simple desire to build a better airplane, can lead us to the very edge of our understanding of space and time. This is the art of the possible, where the abstract beauty of equations is forged, through computational ingenuity, into a new and profound way of seeing the universe.