Scientific Computing: Principles and Applications

SciencePedia
Key Takeaways
  • Computer arithmetic's inherent limitations, like rounding errors and non-associativity, demand careful algorithm design and an awareness of numerical stability.
  • Effective computational methods approximate continuous problems with finite steps while providing precise control over the resulting error, enabling smart, adaptive algorithms.
  • The efficiency of large-scale parallel computations is constrained not just by serial code (Amdahl's Law) but also by the significant cost of inter-processor communication.
  • Scientific computing acts as a digital laboratory, enabling simulations across disciplines from engineering to quantum physics, but requires rigorous verification of the entire process.

Introduction

In the modern era, scientific inquiry is no longer confined to the laboratory bench or the theorist's blackboard. A third pillar has emerged, one of immense power and complexity: scientific computing. It allows us to simulate the birth of galaxies, design life-saving drugs, and engineer materials that have not yet been created. However, to wield this powerful tool effectively is to move beyond simply running software and to understand its inner workings—its fundamental rules, surprising pitfalls, and profound capabilities. This article addresses the gap between using computational tools and truly mastering them, offering a journey into the heart of digital discovery.

Across the following sections, we will first explore the core ​​Principles and Mechanisms​​ that govern the computational world. We will uncover the treacherous nature of computer arithmetic, learn how infinity is tamed by finite steps, and understand the art of choosing stable and efficient algorithms for single and parallel processors. Following this, we will turn to the diverse ​​Applications and Interdisciplinary Connections​​, where we will see these principles in action, transforming fields from engineering and materials science to pharmacology and astrophysics. This journey begins not with code, but with the very concepts that make scientific computing both a rigorous science and a creative art.

Principles and Mechanisms

Imagine you are on a journey into the heart of modern science. The landscape is not one of test tubes and lab coats, but of pure thought, rendered into algorithms and executed on machines of unimaginable speed. This is the world of scientific computing. Like any journey into a new world, we must first learn its fundamental laws. Some are intuitive, some are strange, and some are profoundly beautiful in their subtlety. They are not just rules for programmers; they are deep principles about knowledge, error, and the very nature of discovery in the digital age.

The Treacherous World of Numbers

Our journey begins with the most basic concept of all: a number. In the pristine world of mathematics, numbers are perfect, infinitely precise beings. The number $1$ is exactly one, and $\pi$ has an endless, majestic trail of digits. But in a computer, everything must be stored in a finite number of bits. This simple, practical constraint gives birth to a whole new kind of arithmetic, a world with its own peculiar rules.

Let's play a simple game. What is $100{,}000{,}000 + 1 + 1 - 100{,}000{,}000$? In your head, you instantly get $2$. But a computer might tell you the answer is $0$. How can this be? The computer uses a system called floating-point arithmetic. Think of it as a form of scientific notation with a fixed number of significant digits. Let's say our computer can only store 3 significant digits. The number $100{,}000{,}000$ is written as $1.00 \times 10^8$. The number $1$ is $1.00 \times 10^0$.

When the computer tries to add $1.00 \times 10^8$ and $1.00 \times 10^0$, it must first align the exponents. The number $1$ becomes $0.00000001 \times 10^8$. The sum is $1.00000001 \times 10^8$. But alas, our machine only keeps 3 significant digits, so it rounds the result... back to $1.00 \times 10^8$. The tiny number $1$ has been completely washed away, a phenomenon called swamping. So the computer evaluates $((10^8 + 1) + 1) - 10^8$ step by step: $10^8 + 1$ rounds back to $10^8$, adding $1$ again still gives $10^8$, and finally $10^8 - 10^8 = 0$.

But what if we re-order the calculation? What if we compute $(10^8 - 10^8) + (1 + 1)$? The first part is $0$, the second is $2$. The answer is $2$. We got two different answers, $0$ and $2$, just by changing the order of addition! This is a shocking revelation: floating-point addition is not associative. The comfortable rules of high school algebra do not apply here. This isn't a bug; it's a fundamental property of the finite world we are operating in. Cleverly re-ordering operations, for example subtracting large numbers from each other before they can swamp smaller ones, is a crucial art in numerical programming.
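The same effect is easy to reproduce in ordinary double precision, where roughly 16 significant digits play the role of our toy machine's 3. The sketch below uses $10^{16}$ so that the swamping happens right at double precision's limit:

```python
# Double precision keeps ~16 significant digits, so adding 1 to 1e16
# is "swamped": the 1 is rounded away, just as in the 3-digit example.
left_to_right = 1e16 + 1 + 1 - 1e16      # the 1s vanish, one at a time
reordered     = (1e16 - 1e16) + (1 + 1)  # cancel the big numbers first

print(left_to_right)  # 0.0
print(reordered)      # 2.0
```

Two mathematically identical expressions, two different answers: non-associativity in action.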

This strange world even has a special value for results that make no sense: NaN, which stands for "Not a Number." What's the square root of $-1$? NaN. What's zero divided by zero? NaN. A NaN is not a bug to be feared, but a feature to be respected. It is an honest signal that something has gone mathematically awry. According to the standard rules of floating-point arithmetic (IEEE 754), any operation involving a NaN produces another NaN. It's like a drop of poison that contaminates everything it touches. If you are summing a million numbers and one of them is NaN, your final sum will be NaN. This is wonderfully useful! It's an alarm bell that rings loudly, preventing you from unknowingly trusting a result that has been corrupted by a mathematical impossibility somewhere deep inside a complex calculation. The worst mistake is not getting a NaN; it's trying to "fix" it by, say, replacing it with zero, and then getting a plausible-looking but silently wrong finite answer.
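The poison-drop behavior is easy to see in Python, whose floats follow IEEE 754:

```python
import math

nan = float("nan")        # the IEEE result of operations like 0/0
total = sum([1.0, 2.0, nan, 4.0])

print(math.isnan(total))  # True: one NaN poisons the whole sum
print(nan == nan)         # False: NaN is not even equal to itself,
                          # which is the standard way to detect it
```

That last line is worth remembering: `x != x` is true only when `x` is NaN, so it doubles as a portable NaN test.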

Taming Infinity with Finite Steps

Now that we are aware of the shaky ground of computer numbers, how can we possibly hope to perform the elegant operations of calculus, which are built on the concepts of limits and infinity? How do we calculate the area under a curve, $\int f(x)\,dx$, when we can't even add numbers without worry?

The answer is that we don't try to be perfect. We approximate. But—and this is the genius of it—we create methods to precisely measure our own imperfection.

Consider the task of finding the area under a curve $f(x)$ from point $a$ to $b$. The simplest idea is the Midpoint Rule: just draw a rectangle whose height is the value of the function at the midpoint of the interval, $m = (a+b)/2$, and whose width is $(b-a)$. The area is then simply $(b-a)\,f(m)$. This seems crude, but here is the magic. Using the tools of calculus (specifically, Taylor's theorem), we can derive a formula for the error we are making! For a reasonably smooth function, the error is given by $E = \frac{(b-a)^3}{24} f''(c)$, where $f''$ is the second derivative (the curvature) of the function at some point $c$ in the interval.

This formula is a revelation. It tells us that the error depends very strongly on the width of the interval: it shrinks with the cube of the width. Halving our rectangle's width doesn't just halve the error; it reduces it by a factor of eight! It also tells us the error is proportional to the function's curvature, $f''$. If the function is a straight line, its curvature is zero, and the Midpoint Rule gives the exact answer, as we'd expect.
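We can watch the cubic scaling happen. The sketch below applies a single midpoint rectangle to $f(x) = x^2$, whose curvature is constant ($f'' = 2$), first on $[0, 1]$ and then on $[0, 1/2]$; the error formula predicts an error of exactly $h^3/12$ for a rectangle of width $h$, so halving the width should shrink the error by a factor of eight:

```python
def midpoint_rule(f, a, b):
    """Area of one rectangle of width (b - a), height f at the midpoint."""
    return (b - a) * f((a + b) / 2)

f = lambda x: x * x        # the exact integral of x^2 from 0 to b is b**3 / 3
err_full = 1 / 3      - midpoint_rule(f, 0.0, 1.0)   # width h = 1
err_half = 0.5**3 / 3 - midpoint_rule(f, 0.0, 0.5)   # width h = 1/2

print(err_full / err_half)   # ~8: the error fell by the cube of the halving
```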

This knowledge is not just academic; it allows us to build smart, adaptive algorithms. Imagine you are driving a car through a landscape representing your function. Where the road is straight and flat (low curvature), you can go fast. Where it's curvy and mountainous (high curvature), you must slow down to be safe. An adaptive numerical method does exactly this. It takes a step of size $h_{\text{old}}$ and estimates the error it just made, $\epsilon_{\text{old}}$. If this error is much smaller than our desired tolerance, the algorithm knows the landscape is smooth and proposes a larger next step, $h_{\text{new}}$. If the error is too large, it knows the terrain is rough, so it discards the result and tries again with a smaller step. The error formula tells us precisely how to adjust: if a method's error scales with step size as $\epsilon \propto h^{p+1}$, we can calculate the ideal next step size as $h_{\text{new}} = h_{\text{old}} \left(\frac{\text{tolerance}}{\epsilon_{\text{old}}}\right)^{1/(p+1)}$. This is an algorithm that feels its way through the problem, working hard only where necessary and saving immense amounts of computation.
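Here is a minimal, illustrative sketch of that drive-fast-where-flat strategy (a toy, not any particular library's implementation): a midpoint-rule integrator that estimates each step's error by comparing one step against two half-steps, and rescales the step with the $(\text{tolerance}/\epsilon_{\text{old}})^{1/(p+1)}$ formula, where $p + 1 = 3$ for the midpoint rule:

```python
import math

def adaptive_midpoint(f, a, b, tol):
    """Integrate f over [a, b], choosing each step size from the error just made."""
    total, x = 0.0, a
    h = (b - a) / 16                  # arbitrary first guess at a step size
    while x < b - 1e-14 * (b - a):
        h = min(h, b - x)
        coarse = h * f(x + h / 2)                            # one midpoint step
        fine = (h / 2) * (f(x + h / 4) + f(x + 3 * h / 4))   # two half-steps
        err = abs(fine - coarse) / 3  # Richardson-style estimate of fine's error
        budget = tol * h / (b - a)    # this step's share of the total tolerance
        if err <= budget:             # smooth terrain: accept and move on
            total += fine
            x += h
        # accepted or not, rescale the step with the formula from the text
        h = 2 * h if err == 0 else 0.9 * h * (budget / err) ** (1 / 3)
    return total

area = adaptive_midpoint(math.sin, 0.0, math.pi, 1e-8)
print(abs(area - 2.0) < 1e-6)   # True: the integral of sin on [0, pi] is 2
```

The 0.9 is a conventional safety factor; without it, the controller would aim exactly at the tolerance and reject roughly every other step.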

The Art of Choosing the Right Tool

As our problems become more complex, we often find there are multiple algorithms that claim to do the same job. Which one should we choose? The answer often lies not in their speed, but in their ​​numerical stability​​—their resilience in the face of the rounding errors we saw earlier.

A classic example comes from computing eigenvalues, the special numbers that characterize the behavior of matrices. Two famous iterative methods for this are the LR and the QR algorithm. In the perfect world of exact mathematics, both methods generate a sequence of matrices that converge to reveal the eigenvalues. They are both based on a similarity transformation, $A_{k+1} = S^{-1} A_k S$, which preserves eigenvalues.

The difference is in the matrix $S$. The LR algorithm uses a triangular matrix $L_k$, while the QR algorithm uses an orthogonal matrix $Q_k$. What is an orthogonal matrix? Geometrically, it represents a rigid motion, like a rotation or a reflection. It doesn't stretch, shear, or distort space. When you apply it to a problem, it doesn't amplify errors. Its "condition number," a measure of error amplification, is a perfect 1. The $L_k$ matrix from the LR algorithm, however, can represent a severe shearing transformation. It can be wildly ill-conditioned, meaning it can take tiny, unavoidable rounding errors and blow them up to catastrophic proportions, rendering the result meaningless. The QR algorithm, by sticking to stable, rigid rotations, is numerically robust and is the cornerstone of modern eigenvalue computation for this very reason.
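A tiny numerical illustration of the contrast (the matrices are invented for illustration, not taken from an actual LR or QR iteration): a rotation leaves every vector's length alone, while even an innocent-looking unit triangular matrix can stretch a vector, and any rounding error riding along with it, by an enormous factor:

```python
import math

def apply(M, v):
    """Multiply a 2x2 matrix M by a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

theta = 0.7                                   # any angle will do
Q = [[math.cos(theta), -math.sin(theta)],     # orthogonal: a pure rotation
     [math.sin(theta),  math.cos(theta)]]
L = [[1.0, 0.0],                              # unit lower triangular, as in LR,
     [1e8, 1.0]]                              # but a violent shear

v = [3.0, 4.0]
print(math.hypot(*apply(Q, v)))   # ~5.0: rigid motion, length preserved
print(math.hypot(*apply(L, v)))   # ~3e8: the input is blown up hugely
```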

This idea of a problem's inherent sensitivity is captured by the condition number. Let's say we are solving the system of equations $Ax = b$. We use an iterative method, and it proudly reports a tiny "residual": the quantity $r = b - A x_k$ is very small for our approximate solution $x_k$. We might think we are done. But we have been deceived! The quantity we care about is the true error, $x - x_k$. The relationship between what we can measure (the residual) and what we want to know (the error) is governed by the condition number of the matrix $A$, denoted $\kappa(A)$. The rule is approximately:

$$\text{Relative Error} \le \kappa(A) \times \text{Relative Residual}$$

If $\kappa(A)$ is large, the matrix is ill-conditioned. This means a tiny relative residual can coexist with a gigantic relative error. The matrix acts as a massive amplifier for uncertainty. Imagine a problem where the condition number is $10^8$. Your algorithm might report a residual of $10^{-7}$, which looks fantastic, but your actual solution could still be $10\%$ off! Knowing the condition number of your problem is just as important as the solution itself; it tells you how much you can trust your answer.
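A two-equation example makes the deception concrete (the numbers are invented for illustration). The nearly singular matrix below has a condition number around $4 \times 10^4$; the candidate solution has a relative residual of a few parts in $10^5$, yet it is 100% wrong:

```python
import math

A = [[1.0, 1.0],
     [1.0, 1.0001]]          # nearly singular: kappa(A) is roughly 4e4
x_true = [1.0, 1.0]
b = [2.0, 2.0001]            # b = A applied to x_true
x_bad = [2.0, 0.0]           # a plausible-looking "solution"

norm = lambda v: math.hypot(v[0], v[1])
residual = [b[i] - (A[i][0] * x_bad[0] + A[i][1] * x_bad[1]) for i in range(2)]

rel_residual = norm(residual) / norm(b)                   # ~3.5e-5, looks great
rel_error = norm([x_true[0] - x_bad[0],
                  x_true[1] - x_bad[1]]) / norm(x_true)   # 1.0, i.e. 100% off
print(rel_residual, rel_error)
```

The residual is tiny because `x_bad` nearly satisfies both equations; the error is huge because the two equations are nearly parallel, so many very different vectors nearly satisfy them.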

The Symphony of Parallelism

The great triumphs of modern scientific computing—from climate modeling to drug discovery—are not achieved by a single processor thinking very hard, but by a symphony of thousands or even millions of processors working in concert. But making them work together efficiently is a deep and challenging art.

The first, and most fundamental, principle of parallel computing is Amdahl's Law. It's a dose of sobering reality. Suppose you have a task, and you find that $80\%$ of it can be perfectly split among any number of processors (the parallel part), but $20\%$ of it is inherently sequential: it must be done by one processor alone (the serial part). You might think that with a million processors, you could get a nearly million-fold speedup. Amdahl's Law says no. No matter how many processors you use, the total time will never be less than the time it takes to run that stubborn $20\%$ serial part. The maximum possible speedup is limited to $1 / (\text{serial fraction})$, which in this case is $1 / 0.2 = 5$. You have a million-processor supercomputer, and you can only make your code five times faster! This law forces us to hunt down and minimize every last bit of serial work.
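Amdahl's bound is a one-line formula: with serial fraction $s$ and $p$ processors, the speedup is $1 / (s + (1 - s)/p)$, which tends to $1/s$ as $p$ grows:

```python
def amdahl_speedup(serial_fraction, processors):
    """Best possible speedup when serial_fraction of the work cannot be parallelized."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / processors)

for p in (10, 1000, 1_000_000):
    print(p, amdahl_speedup(0.2, p))   # creeps toward, but never reaches, 5.0
```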

But the story gets even more subtle. A parallel computation is not just about work; it's about ​​communication​​. Imagine a team of people trying to solve a puzzle. If they can all work on their own pieces without talking, they will be very efficient. But what if they constantly need to stop and have a meeting to decide on the next step?

This is exactly the dilemma faced in many large-scale matrix computations. A numerically very safe procedure called "full pivoting" requires, at every single step of the calculation, a global search for the largest number in the remaining matrix. On a supercomputer where the matrix is distributed across thousands of processors, this means every processor must stop computing, report its local maximum value, participate in a global "conference call" to find the overall maximum, and wait for the result before proceeding. This communication and synchronization creates a massive bottleneck that stalls the entire machine. A slightly less stable but still effective strategy, "partial pivoting," only requires a local conversation among a small group of processors. On a parallel machine, this is vastly more efficient. The lesson is profound: in high-performance computing, the cost of talking is often far greater than the cost of thinking.

A New Kind of Scientific Rigor

This journey through the principles of scientific computing leads us to a final, and perhaps most important, destination: a new understanding of what it means to be scientifically rigorous in the computational era.

Traditionally, we might test a piece of scientific software by running it on a few example cases and checking if the answers look reasonable. But as we've seen, this is a dangerous game. An algorithm might work for 99 inputs but fail catastrophically on the 100th. The modern paradigm demands a higher standard: ​​formal verification​​. Instead of just testing, we aim to prove that our code is correct. This involves writing a formal contract for our code, with ​​preconditions​​ (what must be true about the inputs) and ​​postconditions​​ (what the code guarantees about the output). We then use mathematical logic and automated theorem provers to demonstrate that if the preconditions are met, the postconditions will always be satisfied, for every possible valid input. This proof can even include a rigorous, guaranteed bound on the numerical error produced by floating-point arithmetic. This shifts computation from an empirical craft to a verifiable science, producing results with a level of trust and reproducibility that testing alone can never achieve.

This demand for total awareness extends to the very end of the scientific process: visualization. We run our complex simulation, we get our data, and we make a plot to see the result. Is the distribution of our data unimodal (one peak) or bimodal (two peaks)? The answer might seem obvious from the picture. But the picture itself is the output of another algorithm. Changing the bin width of a histogram, the smoothing parameter of a density estimate, or even the random seed used for a bit of visual "jitter" can dramatically change the shape of the plot, and with it, our scientific conclusion.

The ultimate lesson is this: the entire computational pipeline—from the choice of floating-point representation, to the algorithm's stability, to the parallelization strategy, to the parameters of the final plot—is the scientific instrument. To be truly rigorous and reproducible, we must understand, control, document, and be prepared to justify every single choice we make. This is the great challenge and the profound beauty of scientific computing: it is not just about getting the right answers, but about building a complete, transparent, and verifiable path to knowledge itself.

Applications and Interdisciplinary Connections

Alright, we've spent some time looking under the hood, fiddling with the gears and wires of scientific computing. We've talked about how to represent numbers, how to make algorithms efficient, and how to tame the errors that creep in. That's all essential, but it's like learning the rules of grammar without ever reading a beautiful poem. The real fun, the real point of all this, is to see what we can do with it. What stories can we tell? What worlds can we explore? Scientific computing is not just a tool for getting answers; it has become a third pillar of scientific inquiry, a partner to theory and experiment. It is our digital microscope for seeing the unseeable, our time machine for watching galaxies evolve, and our sandbox for playing with the very laws of nature.

The Digital Laboratory: Simulating the Fabric of Reality

Imagine you're an engineer designing a new turbine blade. You want to know how heat will flow through it under extreme conditions. Before the age of computers, you had two choices: build a physical prototype (expensive and slow) or try to solve the brutally complex equations of heat flow with pencil and paper (often impossible). Today, we have a third way. We can build a virtual turbine blade inside the computer.

The first step is to turn the continuous reality of the blade into something a computer can handle: a collection of discrete points or volumes. This process, called discretization, transforms the elegant partial differential equation for heat flow, $\nabla \cdot (k(\boldsymbol{x}) \nabla T(\boldsymbol{x})) + q(\boldsymbol{x}) = 0$, into a giant system of linear equations, which we can write in the familiar form $A\boldsymbol{T} = \boldsymbol{b}$. Here, $\boldsymbol{T}$ is a vector representing the temperature at thousands, or even millions, of points in our virtual blade. The matrix $A$ describes how heat at one point affects its neighbors. Setting this up correctly is a craft in itself, involving careful choices about how to represent the geometry, boundary conditions, and material properties. Modern software toolkits like PETSc or Trilinos provide the powerful machinery for this, but it is the scientist who must correctly feed the beast, specifying everything from the data structures to the properties of the matrix, for instance telling the solver that the matrix is symmetric and positive definite, which allows it to use much faster solution methods.
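To make the pipeline concrete, here is a toy one-dimensional version (the real blade is three-dimensional and uses PETSc-scale machinery, so treat every detail as illustrative): the rod equation $-k\,T'' = q$ with $T = 0$ at both ends, discretized by finite differences into a tridiagonal system $A\boldsymbol{T} = \boldsymbol{b}$ and solved with the Thomas algorithm:

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: O(n) Gaussian elimination for a tridiagonal system."""
    n = len(diag)
    diag, rhs = diag[:], rhs[:]            # don't clobber the caller's data
    for i in range(1, n):                  # forward elimination
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    T = [0.0] * n
    T[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        T[i] = (rhs[i] - upper[i] * T[i + 1]) / diag[i]
    return T

n, k, q = 99, 1.0, 2.0                     # interior points, conductivity, source
h = 1.0 / (n + 1)
# -k * (T[i-1] - 2 T[i] + T[i+1]) / h^2 = q  at each interior grid point
lower = [-k / h**2] * n
diag  = [2 * k / h**2] * n
upper = [-k / h**2] * n
rhs   = [q] * n

T = solve_tridiagonal(lower, diag, upper, rhs)
# the exact solution of -T'' = 2 with T(0) = T(1) = 0 is T(x) = x (1 - x)
worst = max(abs(T[i] - (i + 1) * h * (1 - (i + 1) * h)) for i in range(n))
print(worst)   # tiny: the scheme is exact for this quadratic solution, up to rounding
```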

Once you have your enormous system of equations, the real race begins. If you have a million grid points, your matrix $A$ could conceptually have a million times a million entries! Storing it directly is out of the question. But we know from the physics that temperature at one point is only directly affected by its immediate neighbors. This means the matrix is sparse: mostly filled with zeros. The challenge is to solve this system efficiently. Let's say we have $N$ grid points along each side of a square, giving us $N^2$ unknowns. A straightforward "direct" solver, a bit like a brute-force version of Gaussian elimination, might take a number of operations that scales like $(N^2)^2 = N^4$, or, with a cleverer ordering such as nested dissection, like $N^3$. Now, an iterative method like multigrid comes along and, through a beautiful process of solving the problem on progressively coarser grids, gets the job done in a time that scales like $N^2$. What's the difference? If you double $N$, the $N^3$ method becomes eight times slower, while the $N^2$ method only becomes four times slower. For large $N$, this is the difference between getting your answer this afternoon and waiting until next week. This choice of algorithm is not a minor detail; it determines what is computationally possible and what remains forever out of reach.
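The sparsity claim is easy to check by assembling the classic five-point stencil for an $N \times N$ grid and simply counting nonzero entries (a sketch; real codes store the matrix in a compressed sparse format rather than counting like this):

```python
N = 100                        # grid points per side -> N^2 unknowns

nonzeros = 0
for i in range(N):
    for j in range(N):
        nonzeros += 1          # the diagonal entry (the point itself)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < N and 0 <= j + dj < N:
                nonzeros += 1  # one entry per in-grid neighbor

dense_entries = (N * N) ** 2   # what a full matrix would have to store
print(nonzeros, dense_entries) # 49600 vs 100000000: under 0.05% is nonzero
```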

But even with the cleverest algorithm, the digital world has its own peculiar traps. A computer does not store real numbers with infinite precision. It makes tiny rounding errors in every single calculation. Usually, these are harmless. But sometimes, they can conspire to create a disaster. Imagine studying the stresses inside a block of steel under immense pressure, like at the bottom of the ocean. The stress is almost purely hydrostatic (equal in all directions), with only tiny deviations that determine whether the material will bend or break. To calculate this yield condition, you might need to compute the difference between two principal stresses, say $\sigma_1$ and $\sigma_2$. Both are enormous numbers, very close to each other. When the computer subtracts them, the leading digits cancel out, and what's left is mostly the rounding error from the original numbers! This phenomenon, called catastrophic cancellation, can completely destroy your result. The art of scientific computing, then, involves not just choosing a fast algorithm, but choosing a numerically stable one, perhaps using a different but mathematically equivalent formula that avoids such subtractions, or employing high-precision arithmetic to keep the rounding errors at bay.
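The stress calculation follows the same pattern as a classic textbook case, which stands in here for the stress difference: the small root of $x^2 - 10^8 x + 1 = 0$. The naive quadratic formula subtracts two nearly equal giants, and the leading digits annihilate; the mathematically equivalent form $x_2 = c/(a x_1)$ avoids the subtraction entirely:

```python
import math

b, c = 1e8, 1.0                 # x^2 - b x + c = 0; roots near 1e8 and 1e-8
disc = math.sqrt(b * b - 4 * c)

naive  = (b - disc) / 2         # subtracts two numbers agreeing to ~16 digits
x1     = (b + disc) / 2         # the large root: an addition, no cancellation
stable = c / x1                 # the product of the roots is c, so x2 = c / x1

print(naive)    # badly wrong: what remains is mostly rounding error
print(stable)   # ~1e-8, correct to nearly full precision
```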

The power of this digital laboratory extends all the way down to the quantum realm. When materials scientists simulate an alloy containing an element like cerium, their quantum mechanical calculations might report that a cerium atom has an electron configuration of, say, $4f^{0.9}$. What on earth does it mean to have nine-tenths of an electron in an orbital? It's not that the electron has split apart! It's a beautiful glimpse into the weirdness of quantum mechanics. The calculation is telling us that the true state of the atom is a quantum superposition, a rapid fluctuation or an "average" state, that is 90% of the time in the $4f^1$ configuration and 10% of the time in the $4f^0$ configuration, as the electron flickers back and forth into the surrounding metal. The computational result is not just a number; it's a window into the dynamic, probabilistic nature of the quantum world, a concept that would be impossible to "see" otherwise.

The Engine of Discovery: Organizing Complexity and Data

Simulation is one side of the coin. The other is using computation to make sense of the world, whether it's the messy data from a laboratory experiment or the bewildering complexity of a logistical problem.

A pharmacologist, for instance, might measure the effect of a new drug at several different concentrations. The data points might be sparse and irregularly spaced, because experiments are difficult and sometimes unpredictable. How do you find the average effect of the drug over a range of concentrations? This is precisely a question of finding the area under a curve—an integral. But we don't have a nice, clean function; we have a handful of data points. By connecting the dots with straight lines (a piecewise linear model) and calculating the area of the resulting trapezoids, we can get a robust estimate. This simple trapezoidal rule, often one of the first things taught in numerical analysis, becomes a powerful tool for turning raw experimental results into scientifically meaningful quantities.
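With irregularly spaced measurements, the trapezoidal rule is just a sum over the gaps between successive points. A sketch, with made-up concentration and effect values chosen so the exact answer is known:

```python
def trapezoid(xs, ys):
    """Area under the piecewise-linear curve through the points (xs[i], ys[i])."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))

# unevenly spaced "measurements" lying on the line y = 2x, so the exact
# area from 0 to 1 is 1.0 and the trapezoidal rule recovers it exactly
concentrations = [0.0, 0.1, 0.4, 1.0]
effects        = [0.0, 0.2, 0.8, 2.0]
print(trapezoid(concentrations, effects))   # ~1.0
```

Because the rule integrates the piecewise-linear interpolant exactly, it makes no further modeling assumption beyond "connect the dots", which is exactly why it is so robust on sparse experimental data.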

Computation also gives us tools to tame logistical chaos. Imagine you are organizing a university skills fair. Several companies are coming, and each wants to interview students at a specific set of skill stations (Cloud Computing, Data Science, etc.). You only have a limited number of time slots, and the constraint is that no single company should have all of its desired stations scheduled in the same time slot, because their recruiter can't be in two places at once. How many time slots do you need? This sounds like a messy puzzle, but it can be elegantly translated into a problem in abstract mathematics: the coloring of a hypergraph. The skill stations are the vertices of the hypergraph, and each company's list of desired skills forms a "hyperedge." The problem then becomes: what is the minimum number of colors (time slots) needed to color the vertices so that no hyperedge is monochromatic (all one color)? This abstract formulation allows us to bring powerful algorithmic machinery to bear on a problem that would be a nightmare to solve by trial and error.
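A brute-force sketch of that translation, small enough to check by hand (the station and company names are invented): try $k = 1, 2, \dots$ slots and search for an assignment in which no company's station set is monochromatic:

```python
from itertools import product

def min_time_slots(stations, companies):
    """Minimum k so stations can be k-colored with no company's set all one color.
    Assumes every company wants at least two distinct stations."""
    for k in range(1, len(stations) + 1):
        for colors in product(range(k), repeat=len(stations)):
            slot = dict(zip(stations, colors))
            if all(len({slot[s] for s in wanted}) >= 2 for wanted in companies):
                return k
    return None

stations = ["Cloud", "Data", "ML", "Security"]
companies = [{"Cloud", "Data"}, {"Data", "ML"}, {"Cloud", "ML", "Security"}]
print(min_time_slots(stations, companies))   # 2: two time slots suffice
```

The exhaustive search is exponential, which is precisely why the abstract hypergraph-coloring formulation matters: it unlocks decades of smarter algorithms for instances far too large for brute force.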

Many of the most powerful computational techniques, from financial modeling to particle physics, rely on the Monte Carlo method: the idea of using randomness to find answers. But what happens when you run such a simulation on a massive supercomputer with thousands of processors? You need each processor to have its own independent stream of random numbers. If two processors accidentally use the same or overlapping sequences, they are no longer independent. They might become secretly correlated, poisoning your entire result in a way that is incredibly difficult to detect. So, how do you hand out "randomness" to thousands of processors? You use number theory! The pseudo-random number generators used in computers are not truly random; they are deterministic sequences generated by modular arithmetic, like $x_{t+1} \equiv a x_t \pmod{m}$. These sequences are so long that they appear random. Using the properties of modular exponentiation, we can calculate exactly where in the sequence the billionth number will be without computing all the numbers in between. This allows us to give each processor its own unique starting seed, ensuring that their streams of "random" numbers are completely disjoint. It's a beautiful application of pure mathematics to solve a profoundly practical problem in high-performance computing.
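The jump-ahead trick is a few lines of modular arithmetic. For a multiplicative generator $x_{t+1} \equiv a x_t \pmod m$, the state $n$ steps ahead is simply $a^n x_t \bmod m$, and Python's built-in three-argument `pow(a, n, m)` computes $a^n \bmod m$ by fast modular exponentiation. A sketch using the classic "minimal standard" parameters (illustrative; production codes use better-vetted generators):

```python
M = (1 << 31) - 1        # the Mersenne prime 2^31 - 1
A = 16807                # the classic minimal-standard multiplier for this modulus

def step(x):
    return (A * x) % M   # one draw of the generator

def jump(x, n):
    """State n steps ahead, without generating the n numbers in between."""
    return (pow(A, n, M) * x) % M   # fast modular exponentiation does the leap

x = seed = 12345
for _ in range(10_000):
    x = step(x)                      # the slow way: 10,000 individual steps

print(x == jump(seed, 10_000))       # True: each processor can leap to its own block
```

In practice each of $P$ processors is handed the state `jump(seed, rank * block_length)`, guaranteeing disjoint, non-overlapping streams.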

Taming the Behemoth: Frontiers of Modern Computation

As our ambitions grow, so does the complexity of our simulations. We don't just want to simulate one turbine blade; we want to find the optimal blade by exploring thousands of different designs, materials, and operating conditions. Running a full high-fidelity simulation for every single possibility is computationally impossible. This "curse of dimensionality" is a major barrier. The frontier of research here lies in creating reduced-order models. The idea is to run a few expensive, high-fidelity simulations—collecting "snapshots" of the solution—and then use mathematical techniques like ​​Proper Orthogonal Decomposition (POD)​​ to extract the most important underlying patterns, or "modes." These modes form a highly efficient basis, a kind of computational shorthand, for representing the solution. We can then build a cheap, fast "surrogate model" that gives us nearly the same answer as the full simulation but runs in a fraction of the time. This allows us to explore vast parameter spaces, perform uncertainty quantification, and even use simulations for real-time control. Techniques like POD and the related ​​Proper Generalized Decomposition (PGD)​​ are like creating a distilled map of a vast and complex landscape.

The complexity also arises from the dynamics of the system itself. Imagine simulating the airflow around a bird in flight or a propeller spinning in water. The geometry is constantly changing. On a parallel computer, the region of intense computation—the "cut cells" right at the moving boundary—is constantly migrating from the domain of one processor to another. If we use a static decomposition of the work, some processors will be swamped with these expensive cut cells while others sit mostly idle, waiting for the slowest one to finish. This is incredibly inefficient. The solution is dynamic load balancing. The processors must constantly communicate, assess the workload, and re-distribute the problem among themselves on the fly. It's like a team of workers constantly reorganizing to tackle a moving hotspot of activity, ensuring that the overall effort remains balanced and efficient. This is essential for tackling the grand challenges of computational science, from climate modeling to astrophysics.

Finally, we come to a most profound question. We've seen how computation can simulate physical reality. But are there limits? Is there anything in our universe that a classical computer, governed by the laws of classical physics, is fundamentally incapable of simulating? The answer, astonishingly, seems to be yes. Experiments in quantum mechanics reveal correlations between distant particles that are stronger than any classical theory can allow. The famous Bell inequality, and its experimental test known as the CHSH inequality, provides a strict mathematical bound that any simulation based on "local realism" must obey. This means any classical simulation where information is local and outcomes are determined by pre-existing "hidden variables" (even random ones) cannot reproduce the correlations we see in nature. Quantum mechanics routinely violates this bound. For example, a set of observed correlations might yield a value of $2\sqrt{2}$ for the CHSH expression, where the classical limit is just $2$. This tells us that our universe has a non-local character that cannot be captured by a classical algorithm based on local information exchange. To simulate such a system faithfully, we would need a computer that itself harnesses these quantum effects: a quantum computer.
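That quantum value of $2\sqrt{2}$ can itself be computed. For two spin-$\tfrac{1}{2}$ particles in the singlet state, quantum mechanics predicts the correlation $E(a, b) = -\cos(a - b)$ between measurements along angles $a$ and $b$; plugging the standard optimal angles into the CHSH combination reproduces the quantum maximum, beyond the classical limit of $2$:

```python
import math

def E(a, b):
    """Singlet-state correlation predicted by quantum mechanics."""
    return -math.cos(a - b)

# the standard angle choices that maximize the CHSH expression
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S, 2 * math.sqrt(2))   # both ~2.828: quantum correlations beat the classical 2
```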

And so our journey through the applications of scientific computing brings us to the very edge of what is computable and what is real. It is a field that is constantly evolving, driven by our insatiable curiosity to understand the world at every scale, from the intricate dance of drug molecules to the grand architecture of the cosmos, and even to the strange and beautiful logic of quantum reality itself. The adventure is far from over.