Propagation of Uncertainty

SciencePedia
Key Takeaways
  • Propagation of uncertainty is the mathematical framework for determining the final uncertainty in a calculated quantity based on the uncertainties of its input measurements.
  • Uncertainties from independent measurements combine in quadrature (like the Pythagorean theorem), meaning their contributions to the total variance are summed, not their absolute values.
  • A constant measurement uncertainty can result in a variable and sometimes dramatically large uncertainty in a calculated quantity, depending on the mathematical function used for transformation.
  • Uncertainty analysis serves as a critical tool for designing robust experiments and selecting superior data analysis methods, such as using weighted regression to handle non-uniform errors.

Introduction

Every measurement in experimental science is an approximation, a value surrounded by a cloud of uncertainty. But what happens when we use these "fuzzy" numbers in a formula to calculate a new result? The individual uncertainties don't simply vanish; they combine and propagate, creating a new uncertainty in the final answer. This article tackles this fundamental challenge head-on, providing a comprehensive guide to the propagation of uncertainty—a set of mathematical rules for predicting how errors compound. We will begin in the "Principles and Mechanisms" chapter by dissecting the core formula, from the simplest single-variable case to the "Pythagorean theorem of errors" for multiple independent measurements, and finally to the master equation that accounts for correlated errors. Subsequently, the "Applications and Interdisciplinary Connections" chapter will showcase the profound impact of this framework, demonstrating its use in fields ranging from analytical chemistry and particle physics to computational modeling and quantum metrology. Through this journey, you will learn not just how to calculate an error bar, but how to use uncertainty as a powerful tool for scientific discovery.

Principles and Mechanisms

Imagine you are trying to measure the length of a table with a ruler. You squint, you line it up, and you read "150.2 centimeters." But is it exactly 150.2? Of course not. Maybe it's 150.21, or 150.19. Your measurement has a small "wiggle" in it, a region of uncertainty. This is the fundamental truth of all experimental science: every measurement we make is an approximation. It's not a single, perfect number, but a value with a cloud of uncertainty around it.

Now, suppose you want to calculate the area of this tabletop. You measure the width, which also has its own uncertainty. You then plug these two slightly fuzzy numbers into the formula: Area = length × width. What happens? The fuzziness doesn't just disappear. The individual wiggles from your length and width measurements combine and propagate into a new, larger wiggle in your final calculated area. The goal of propagation of uncertainty is to be a master fortune-teller for these wiggles. It's a set of rules that allows us to predict the uncertainty in a calculated result based on the uncertainties of the inputs. It's the mathematics of how ignorance compounds.

The Simplest Case: One Wiggle and its Shadow

Let's start with the most basic scenario. You measure a single quantity, let's call it $x$, with an uncertainty of $\delta x$. You then calculate a new quantity, $f(x)$, that depends only on your measurement. How big is the new uncertainty, $\delta f$?

Think of it like walking on a hilly landscape. Your position on the map is $x$, and the altitude is $f(x)$. The uncertainty $\delta x$ is a small wobble in your map position. How much does this wobble affect your altitude? If you're on a very steep part of the hill, even a tiny wobble in position can lead to a huge change in altitude. If you're on a flat plain, the same wobble might barely change your altitude at all.

The "steepness" of the function is its derivative, $\frac{df}{dx}$. So, to a very good approximation, the resulting uncertainty $\delta f$ is simply the initial uncertainty $\delta x$ multiplied by how sensitive the function is to changes in $x$. Mathematically, we write this as:

$$\delta f \approx \left| \frac{df}{dx} \right| \delta x$$

We take the absolute value because we don't care if the wiggle is up or down; we just care about its size.

For instance, if you measure the radius of a circular filter paper to be $r$ with an uncertainty of $u(r)$, the area is $A = \pi r^2$. The "steepness" of this function is $\frac{dA}{dr} = 2\pi r$. So, the uncertainty in the area is $u(A) = 2\pi r \, u(r)$. Notice something interesting: the uncertainty in the area depends not just on the uncertainty in the radius, but on the value of the radius itself! A bigger circle is more sensitive to a small error in its radius.
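In code, the single-variable rule is one line. Here is a minimal Python sketch; the function name and the numbers (a 5.00 cm radius known to ±0.05 cm) are invented for illustration:

```python
import math

def area_uncertainty(r, u_r):
    """Propagate the radius uncertainty u(r) through A = pi*r^2.

    Since dA/dr = 2*pi*r, the propagated uncertainty is u(A) = 2*pi*r*u(r).
    """
    A = math.pi * r**2
    u_A = 2 * math.pi * r * u_r
    return A, u_A

# Hypothetical measurement: r = 5.00 cm, u(r) = 0.05 cm
A, u_A = area_uncertainty(5.00, 0.05)
print(f"A = {A:.2f} +/- {u_A:.2f} cm^2")
```

Note that doubling the radius doubles $u(A)$ even if $u(r)$ stays the same, exactly the sensitivity effect described above.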

This same principle applies whether the function is a square, a reciprocal, or something more exotic. If we measure the refractive index $n$ of an optical fiber to determine the speed of light within it, $v = c/n$, the derivative is $\frac{dv}{dn} = -c/n^2$. A small uncertainty $\delta n$ in the refractive index results in an uncertainty in the speed of $\delta v = \left| \frac{-c}{n^2} \right| \delta n = \frac{c}{n^2} \delta n$.

The Magnifying Glass: When Constant Errors Aren't

Here is where things get truly beautiful and a little bit counter-intuitive. Imagine using a spectrophotometer, a device that measures how much light a solution absorbs. The machine measures transmittance, $T$ (the fraction of light that gets through), and it has a constant uncertainty, say $\sigma_T = 0.0025$, no matter what sample you put in. From this, we calculate the chemically more useful quantity, absorbance, using the formula $A = -\log_{10}(T)$.

Let's apply our rule. The derivative is $\frac{dA}{dT} = -\frac{1}{T \ln(10)}$. So the uncertainty in our calculated absorbance is:

$$\sigma_A = \left| \frac{-1}{T \ln(10)} \right| \sigma_T = \frac{\sigma_T}{\ln(10)} \cdot \frac{1}{T}$$

Look at this result! Even though the instrument's uncertainty $\sigma_T$ is a constant, the uncertainty in our final answer, $\sigma_A$, is proportional to $1/T$. If your sample is very transparent (high $T$, close to 1), the uncertainty in absorbance is small. But if your sample is very dark and opaque (low $T$, close to 0), the $1/T$ term becomes huge, and the uncertainty $\sigma_A$ explodes!

This is a profound lesson. A chemist measuring two solutions, one with 85% transmittance and one with 15%, will find that the absorbance uncertainty for the darker solution is over five times greater than for the clearer one, even though the instrument performed identically in both cases. The propagation of uncertainty formula acts like a magnifying glass, revealing that some measurement regimes are inherently less trustworthy than others. It's not just about calculating a final error bar; it's a guide to designing smarter experiments.
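The $1/T$ amplification is easy to verify numerically. A short sketch using the constant instrument uncertainty quoted above and the two transmittance values from the example (the function name is ours):

```python
import math

def absorbance_uncertainty(T, sigma_T=0.0025):
    """Propagate a constant transmittance uncertainty into absorbance.

    A = -log10(T), so |dA/dT| = 1/(T*ln(10)) and
    sigma_A = sigma_T / (T * ln(10)).
    """
    A = -math.log10(T)
    sigma_A = sigma_T / (T * math.log(10))
    return A, sigma_A

# The two solutions from the text: 85% vs 15% transmittance
for T in (0.85, 0.15):
    A, sA = absorbance_uncertainty(T)
    print(f"T = {T:.2f}: A = {A:.3f} +/- {sA:.4f}")
```

The ratio of the two $\sigma_A$ values is exactly $0.85/0.15 \approx 5.7$: the same instrument, over five times the uncertainty.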

The Pythagorean Theorem of Errors

What happens when our calculation depends on two or more independent measurements? Suppose you are calculating the acceleration $a$ of a block on an inclined plane, given by $a = g \sin\theta$. You measure the acceleration due to gravity, $g$, with some uncertainty $\delta g$, and you measure the angle of the incline, $\theta$, with its own uncertainty $\delta\theta$.

The key word here is independent. The error you made in measuring $g$ has nothing to do with the error you made in measuring $\theta$. One might be a bit high, while the other is a bit low. They don't conspire. Because they are uncorrelated, the uncertainties don't simply add up. Instead, they add like the sides of a right-angled triangle: in quadrature. This is the Pythagorean theorem of errors.

For a function $f(x, y)$, the total variance (the square of the uncertainty) is the sum of the individual variances contributed by each variable:

$$(\delta f)^2 = \left( \frac{\partial f}{\partial x} \right)^2 (\delta x)^2 + \left( \frac{\partial f}{\partial y} \right)^2 (\delta y)^2$$

The terms $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are the partial derivatives. They represent the "steepness" of the function in the $x$ direction and the $y$ direction, respectively. Each term in the sum is the contribution to the total wiggle from one of the input wiggles.

For the sliding block, this becomes $(\delta a)^2 = (\sin\theta)^2 (\delta g)^2 + (g \cos\theta)^2 (\delta \theta)^2$. We can see precisely how much each measurement contributes to our final uncertainty. (A quick but vital note: when derivatives involve angles, the uncertainty in the angle, $\delta\theta$, must be in radians!)
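Here is the quadrature sum for the sliding block as a Python sketch, including the radians conversion that trips up so many lab reports. The input values are hypothetical:

```python
import math

def incline_acceleration_uncertainty(g, dg, theta_deg, dtheta_deg):
    """Quadrature sum for a = g*sin(theta):

    (da)^2 = (sin(theta))^2 (dg)^2 + (g*cos(theta))^2 (dtheta)^2,
    with the angle uncertainty converted to radians first.
    """
    theta = math.radians(theta_deg)
    dtheta = math.radians(dtheta_deg)  # the vital radians conversion
    a = g * math.sin(theta)
    # math.hypot adds its arguments in quadrature (Pythagorean sum)
    da = math.hypot(math.sin(theta) * dg, g * math.cos(theta) * dtheta)
    return a, da

# Hypothetical numbers: g = 9.81 +/- 0.02 m/s^2, theta = 30.0 +/- 0.5 degrees
a, da = incline_acceleration_uncertainty(9.81, 0.02, 30.0, 0.5)
print(f"a = {a:.3f} +/- {da:.3f} m/s^2")
```

Printing the two terms separately would show that, for these numbers, the angle measurement dominates the error budget, which is exactly the kind of diagnosis this decomposition is for.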

This "addition in quadrature" is especially clear when we look at relative uncertainties. For a quantity like the precession of a gyroscope, $\Omega = \tau / L_{\text{spin}}$, where we have measurements for torque $\tau$ and angular momentum $L_{\text{spin}}$, the math can be simplified. It turns out that the square of the relative uncertainty in $\Omega$ is the sum of the squares of the relative uncertainties in $\tau$ and $L_{\text{spin}}$:

$$\left( \frac{\delta \Omega}{\Omega} \right)^2 = \left( \frac{\delta \tau}{\tau} \right)^2 + \left( \frac{\delta L_{\text{spin}}}{L_{\text{spin}}} \right)^2$$

This elegant form holds for any function that is a pure product or quotient of variables. It tells us that if you have a 1% error in torque and a 2% error in angular momentum, your final relative error isn't 3%, but rather $\sqrt{(0.01)^2 + (0.02)^2} \approx 2.2\%$. The errors partially average out.
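Because the rule is the same for any product or quotient, it can be written once and reused; a tiny sketch (the helper name is our own invention):

```python
import math

def relative_uncertainty_quotient(rel_errors):
    """Combine relative uncertainties of a pure product/quotient in quadrature."""
    return math.sqrt(sum(e**2 for e in rel_errors))

# 1% error in torque, 2% in angular momentum -> about 2.2%, not 3%
combined = relative_uncertainty_quotient([0.01, 0.02])
print(f"{combined:.1%}")  # -> 2.2%
```

The quadrature sum is always dominated by its largest term, which is why in practice you should spend your effort improving the single worst measurement, not all of them equally.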

A Symphony of Uncertainties: From Particle Physics to Your Lab

The power of this framework is its ability to unite different kinds of uncertainty. In a particle physics experiment, scientists might observe a total of $N_{tot} = 155$ events that look like a new particle decay. But they also estimate, from simulations and other data, that there is a background of $N_{bg} = 110$ fake events, with an uncertainty on that estimate of $\delta N_{bg} = 8$. The number of true signal events is simply $N_s = N_{tot} - N_{bg}$.

What is the uncertainty in $N_s$? We have two sources of error. First, the background estimate has its given uncertainty, $\delta N_{bg}$. Second, the total number of observed events, $N_{tot}$, is a count of random, discrete events. This kind of process is governed by Poisson statistics, which has a beautiful, built-in rule: the uncertainty in a count $N$ is simply its square root, $\sqrt{N}$. So, the uncertainty in $N_{tot}$ is $\sqrt{155}$.

These two uncertainties—one a systematic estimate, the other a statistical counting error—are independent. So, we can combine them using our Pythagorean rule:

$$(\delta N_s)^2 = (\text{uncertainty in } N_{tot})^2 + (\text{uncertainty in } N_{bg})^2 = (\sqrt{N_{tot}})^2 + (\delta N_{bg})^2 = N_{tot} + (\delta N_{bg})^2$$

Plugging in the numbers, $(\delta N_s)^2 = 155 + 8^2 = 219$, so the final uncertainty is $\delta N_s = \sqrt{219} \approx 14.8$. This single number beautifully synthesizes two fundamentally different kinds of "fuzziness" into one meaningful statement about our confidence in the discovery.
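The whole background-subtraction calculation fits in a few lines of Python, using the numbers from the text:

```python
import math

def signal_uncertainty(n_tot, n_bg, d_n_bg):
    """Signal count N_s = N_tot - N_bg, with Poisson counting error
    sqrt(N_tot) combined in quadrature with the background estimate's
    systematic uncertainty d_n_bg."""
    n_s = n_tot - n_bg
    d_n_s = math.sqrt(n_tot + d_n_bg**2)
    return n_s, d_n_s

# 155 observed events, estimated background of 110 +/- 8
n_s, d_n_s = signal_uncertainty(155, 110, 8)
print(f"N_s = {n_s} +/- {d_n_s:.1f}")  # -> N_s = 45 +/- 14.8
```

A signal of 45 with an uncertainty of about 15 is roughly a "3-sigma" excess, which is exactly the kind of significance statement this propagation makes possible.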

Danger! When Wiggles Explode

Usually, small errors in input lead to small errors in output. But not always. Some calculations are like a house of cards, where a tiny disturbance can bring the whole thing crashing down. This is known as being ill-conditioned.

Consider calculating the determinant of a $2 \times 2$ matrix, $\det(A) = ad - bc$. Now imagine the matrix is "nearly singular," meaning that the product $ad$ is very, very close to the product $bc$. This is like trying to find the tiny difference between two very large, almost identical numbers.

Let's say all our matrix elements $a, b, c, d$ are measured with a small relative uncertainty $\delta$. If we work through the propagation formula, we arrive at a shocking result. The relative uncertainty in the determinant is approximately:

$$\frac{|\Delta(\det A)|}{|\det(A)|} \approx 2 \kappa \delta \quad \text{where} \quad \kappa = \frac{ad + bc}{ad - bc}$$

The term $\kappa$ is the "condition number." Since $ad$ is very close to $bc$, the denominator is tiny, and $\kappa$ is a huge number. Our small initial error $\delta$ is being amplified by this enormous factor! If $\kappa = 1000$ and your initial measurements are good to 0.1%, your final result for the determinant could be off by $2 \times 1000 \times 0.001 = 200\%$. The answer is complete garbage. This is a terrifying and essential lesson in computation: the propagation of uncertainty formula can warn us when a calculation is unstable and not to be trusted.
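A quick sketch makes the blow-up concrete. We read the $2\kappa\delta$ estimate as a worst-case bound (each product $ad$, $bc$ carries roughly $2\delta$ relative error, and the two errors are allowed to add); the matrix entries below are invented so that $ad$ and $bc$ nearly cancel:

```python
def det2x2_condition(a, b, c, d, rel_err):
    """Determinant det = a*d - b*c and its worst-case relative error
    2*kappa*rel_err, where kappa = (a*d + b*c)/(a*d - b*c) is the
    condition number from the text."""
    det = a * d - b * c
    kappa = (a * d + b * c) / det
    return det, abs(2 * kappa * rel_err)

# A nearly singular matrix: ad = 1.000, bc = 0.999 (hypothetical values),
# with each element good to 0.1%
det, rel = det2x2_condition(1.0, 0.999, 1.0, 1.0, 0.001)
print(f"det = {det:.3f}, worst-case relative error = {rel:.0%}")
```

Even though every input is known to one part in a thousand, the determinant here could be wrong by several hundred percent: the calculation, not the measurement, is the weak link.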

Uncertainty as a Compass: Choosing the Better Path

This brings us to one of the most sophisticated uses of uncertainty propagation: as a tool for choosing the best way to analyze our data. In biochemistry, the rate of an enzyme reaction ($v_0$) is often modeled by the Michaelis-Menten equation. To find the key parameters ($K_M$ and $V_{max}$), scientists have long used a trick called the Lineweaver-Burk plot, which turns the equation into a straight line by plotting $1/v_0$ versus $1/[S]$.

But is this a good idea? Let's ask our uncertainty formula. Assume the error in measuring the velocity, $\sigma_{v_0}$, is roughly constant. When we transform our y-axis to $1/v_0$, what happens to this error? As we saw with the spectrophotometer, the uncertainty in the transformed variable becomes $\sigma_{1/v_0} = \sigma_{v_0} / v_0^2$.

This is a disaster! At very low reaction rates (small $v_0$), which are often the hardest to measure accurately, the error is magnified enormously. A standard linear regression treats all points as equally trustworthy, so these highly uncertain points at low $v_0$ can completely distort the fitted line and give you the wrong enzyme parameters.

The propagation of uncertainty formula not only identifies this problem but also tells you how to fix it. For a proper "weighted" regression, each point should be weighted inversely to its variance. The variance of $1/v_0$ is $\sigma_{1/v_0}^2 = \sigma_{v_0}^2 / v_0^4$. Therefore, the correct statistical weight for each point is proportional to $v_0^4$! It also suggests why alternative linearizations, like the Hanes-Woolf plot, can be statistically superior because they don't distort the error structure as violently. Uncertainty propagation isn't just a post-mortem; it's a compass that guides us toward more robust methods of discovery.
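Here is a minimal weighted least-squares sketch showing the $v_0^4$ weights in action. The data are invented for illustration; on a Lineweaver-Burk plot the slope is $K_M/V_{max}$ and the intercept is $1/V_{max}$, so the parameters are recovered from the fit:

```python
def weighted_linefit(xs, ys, ws):
    """Weighted least-squares fit of y = m*x + b with per-point weights ws."""
    S = sum(ws)
    Sx = sum(w * x for w, x in zip(ws, xs))
    Sy = sum(w * y for w, y in zip(ws, ys))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    delta = S * Sxx - Sx**2
    m = (S * Sxy - Sx * Sy) / delta
    b = (Sxx * Sy - Sx * Sxy) / delta
    return m, b

# Hypothetical Michaelis-Menten data: substrate [S] and measured rates v0
S_conc = [0.5, 1.0, 2.0, 5.0, 10.0]
v0 = [0.9, 1.5, 2.2, 3.0, 3.4]
xs = [1 / s for s in S_conc]   # 1/[S]
ys = [1 / v for v in v0]       # 1/v0
weights = [v**4 for v in v0]   # down-weight the noisy low-v0 points
m, b = weighted_linefit(xs, ys, weights)
print(f"Vmax ~ {1/b:.2f}, KM ~ {m/b:.2f}")
```

With all weights set to 1 this reduces to ordinary least squares; the $v_0^4$ weights simply tell the fit to trust the fast, well-measured points more, exactly as the variance analysis demands.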

The Master Equation: When Wiggles Conspire

So far, we have always assumed our initial measurement errors are independent. But what if they're not? What if an error in one measurement makes an error in another one more likely?

Imagine calibrating a sensor. You measure the sensor's response ($y$) for several known concentrations ($x$) and fit a straight line, $y = mx + b$, to find the slope $m$ and intercept $b$. Now you use this calibration to find an unknown concentration from its measured response $\bar{y}_x$, so $x = (\bar{y}_x - b)/m$. The uncertainty in $x$ depends on the uncertainties in $\bar{y}_x$, $b$, and $m$.

But are the errors in the slope and intercept independent? Almost never! If your data points happen to result in a slightly steeper slope ($m$), they will probably also result in a slightly lower intercept ($b$). The estimates are anti-correlated. This relationship is captured by a statistical quantity called covariance, denoted $\sigma_{mb}$.

The full, master equation for propagation of uncertainty for a function $f(x, y)$ includes this term:

$$(\delta f)^2 = \left( \frac{\partial f}{\partial x} \right)^2 (\delta x)^2 + \left( \frac{\partial f}{\partial y} \right)^2 (\delta y)^2 + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy}$$

When applied to our calibration problem, this yields the complete expression for the variance in our final answer, a formula that correctly accounts for the uncertainties in the slope, the intercept, the measurement of the unknown, and—crucially—the fact that the slope and intercept uncertainties are intertwined.
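A sketch of that complete expression for the calibration problem: with $x = (\bar{y}_x - b)/m$, the partial derivatives are $\partial x/\partial \bar{y}_x = 1/m$, $\partial x/\partial b = -1/m$, and $\partial x/\partial m = -x/m$, and the master equation then gives the variance below. The numerical fit results are hypothetical:

```python
import math

def inverse_prediction_uncertainty(m, b, var_m, var_b, cov_mb, y0, var_y0):
    """Uncertainty in x = (y0 - b)/m from a fitted calibration line,
    including the slope-intercept covariance term.

    Master equation with dx/dy0 = 1/m, dx/db = -1/m, dx/dm = -x/m gives:
    var_x = (var_y0 + var_b + x^2*var_m + 2*x*cov_mb) / m^2
    """
    x = (y0 - b) / m
    var_x = (var_y0 + var_b + x**2 * var_m + 2 * x * cov_mb) / m**2
    return x, math.sqrt(var_x)

# Hypothetical calibration: slope, intercept, their variances and covariance,
# plus the measured response of the unknown sample
x, sx = inverse_prediction_uncertainty(
    m=2.0, b=0.10, var_m=0.0004, var_b=0.0009,
    cov_mb=-0.0005,  # anti-correlated, as the text explains
    y0=1.30, var_y0=0.0004,
)
print(f"x = {x:.3f} +/- {sx:.3f}")
```

Note the sign of the covariance matters: with these anti-correlated estimates the cross term actually reduces the final variance compared with naively assuming independence.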

This final formula is the grand unification. It is the culmination of our journey, a single mathematical statement that contains all the simpler cases within it. It shows how the wiggles from every source—independent, correlated, statistical, or systematic—flow through the veins of our equations to define the boundaries of what we truly know. Far from being a dreary accounting exercise, the propagation of uncertainty is a deep and powerful principle that reveals the texture of scientific knowledge itself.

Applications and Interdisciplinary Connections

We have spent some time learning the formal rules for how uncertainties combine—the machinery of error propagation. But to what end? Does this mathematical tool have any real bite, or is it merely an academic exercise for satisfying picky lab instructors? The truth, as is so often the case in science, is far more beautiful and far-reaching. The propagation of uncertainty is not just about bookkeeping; it is the very language we use to express our confidence in the knowledge we build from the imperfect world of measurement. It is the thread that connects the chemist’s beaker, the astronomer’s telescope, and the quantum physicist’s interferometer.

Let us begin our journey in a place familiar to any student of science: the laboratory. Imagine you are in a darkened room, carefully aligning lenses and mirrors on an optical bench. Your goal is simple: to determine the radius of curvature of a concave mirror. You measure the position of the object, the position of the real image it forms, and the position of the mirror itself. Each of these measurements, made with a simple ruler, has a small uncertainty. The mirror equation connects these distances to the radius you seek, but how do the small wobbles in your ruler readings translate into the final uncertainty of your answer? The formula for propagation of uncertainty gives us the precise recipe to combine these errors, even accounting for the tricky fact that some of your calculated distances might depend on the same initial measurement, such as the mirror's position. It tells you not just the mirror's radius, but how well you know it.

This same principle is the lifeblood of analytical chemistry. A chemist uses a spectrophotometer to measure how much light a colored solution absorbs, with the goal of determining the concentration of a substance. The final answer depends on the measured absorbance, the path length of the light through the sample, and the substance's molar absorptivity, a known constant. Each of these quantities comes with its own uncertainty—from the instrument's digital readout, the manufacturing tolerance of the glass cuvette, and the reference experiment that determined the constant. The Beer-Lambert law is the physics, but the propagation of uncertainty is the metrology that tells us how these individual uncertainties conspire to limit the precision of our final concentration value.

The principle extends from static properties to the dynamics of change. When studying how fast a pharmaceutical compound degrades, a chemist measures its concentration at the beginning and end of a time interval. From these two points, a rate constant $k$ is calculated. But the initial and final concentration measurements are not perfect. The uncertainty in the calculated rate constant, a measure of how confident we are in the drug's stability, is directly determined by propagating the uncertainties from the concentration readings. A similar story unfolds in classical thermodynamics, where determining the molar mass of an unknown substance by seeing how much it elevates a solvent's boiling point (ebulliometry) relies on propagating the uncertainties from three separate measurements: the mass of the solvent, the mass of the solute, and the change in temperature. In every case, the framework gives us a rigorous, quantitative answer to the question, "How trustworthy is this number?"

But the modern scientist's laboratory is often not filled with glassware and optical benches, but with the silent hum of processors running complex simulations. Here too, uncertainty is a central character. Imagine simulating the folding of a protein. We might want to know the free energy difference between two shapes, which tells us which one is more stable. Our simulation provides this by building a histogram, essentially counting how many times the system is found in each shape. But these counts are statistical; they fluctuate. The uncertainty in the final free energy difference we calculate is determined by propagating the statistical uncertainty inherent in those counts—which for a well-behaved simulation is simply the square root of the number of counts in each bin. This allows us to distinguish a real energy barrier from a mere statistical ghost in the machine.
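To make the histogram idea concrete, here is one illustrative version of such a calculation, under assumptions of our own choosing: the free energy difference is taken as $\Delta F = -k_BT \ln(n_2/n_1)$ from the counts in two bins, and the $\sqrt{N}$ counting errors propagate through the logarithm as $\delta(\ln n) = 1/\sqrt{n}$:

```python
import math

def free_energy_difference(n1, n2, kT=1.0):
    """Free energy difference between two states from histogram counts,
    dF = -kT * ln(n2/n1). Each count carries a Poisson error sqrt(n),
    so d(ln n) = 1/sqrt(n) and the errors add in quadrature:
    sigma = kT * sqrt(1/n1 + 1/n2).
    """
    dF = -kT * math.log(n2 / n1)
    sigma = kT * math.sqrt(1 / n1 + 1 / n2)
    return dF, sigma

# Hypothetical counts: one conformation visited 8000 times, the other 2000
dF, s = free_energy_difference(8000, 2000)
print(f"dF = {dF:.3f} +/- {s:.3f} kT")
```

A difference of about 1.4 $k_BT$ with an uncertainty of 0.025 $k_BT$ is clearly real; if each bin held only a few dozen counts, the same difference could be a statistical ghost.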

This idea scales up to the most advanced methods in computational science and data analysis. In materials science, researchers use X-ray diffraction to determine the precise arrangement of atoms in a crystal. The raw data is a complex pattern of peaks, which is fed into a sophisticated computer program that refines a structural model to best fit the data. The program doesn't just spit out atomic positions; it also calculates their uncertainties. How? Deep within the algorithm, it calculates a "normal matrix" that describes how sensitive the fit is to each parameter. The propagation of uncertainty formalism shows that the variance of any given parameter, like the length of a chemical bond, is directly proportional to a diagonal element of the inverse of this matrix. In the massive computational screening of new materials, where thousands of compounds are evaluated by computers, this same logic allows us to propagate the known uncertainties from our approximate quantum mechanical models to estimate the reliability of a predicted property, like a material's total energy. Without uncertainty propagation, these powerful computational tools would be flying blind.

Having seen its power on the lab bench and inside the computer, let us now cast our gaze outward, to the grand scales of the cosmos and the bewildering beauty of chaos. When observing the swirling patterns of a heated fluid or the erratic behavior of a stock market, we are in the realm of chaotic systems. These systems are characterized by "strange attractors," complex, fractal objects in phase space whose dimensionality is often not an integer. The Kaplan-Yorke dimension provides an estimate for this fractal dimension based on the system's Lyapunov exponents, which measure the rate of divergence of nearby trajectories. But these exponents are measured from experimental data and have uncertainties. How confident can we be in our calculated dimension? Once again, a straightforward application of error propagation gives us the answer, allowing us to quantify the uncertainty in the very "strangeness" of the attractor we are studying.

Perhaps the most triumphant application of this thinking in history was in the confirmation of Einstein's General Relativity. The theory predicted that the elliptical orbit of Mercury should not be perfectly closed, but should precess by a tiny, specific amount each century. Astronomers had known of an excess precession for decades, but their measurements had uncertainties. Einstein's theory predicted a value that fell squarely within the error bars of the observed excess. The agreement between prediction and observation, including their uncertainties, was a watershed moment for science. Today, as we discover planets around other stars, we can apply the same principle. The predicted precession of an exoplanet's orbit depends on its star's mass and the orbit's size and eccentricity. By propagating the observational uncertainties in these orbital parameters, we can calculate the uncertainty in the predicted precession, setting a clear target for future telescopes that might one day measure this effect and test Einstein's theory in distant solar systems.

Finally, we arrive at the ultimate frontier: the quantum realm. Here, uncertainty is not a nuisance born of imperfect instruments, but a fundamental, irreducible feature of reality, famously encapsulated in Heisenberg's Uncertainty Principle. It might seem that our classical error propagation formula would have little to say here. But the opposite is true. The formalism provides the precise tool to analyze the limits of measurement. In the field of quantum metrology, physicists design clever experiments to measure a quantity, like a tiny phase shift $\phi$, with the highest possible precision. One scheme involves preparing $N$ particles in a fragile, entangled "GHZ" state. The phase is imprinted on the state, and a final measurement is made. The uncertainty in the estimated phase is found using the exact same error propagation formula we have been discussing, relating the variance of the final measurement to the rate of change of its expectation value. When we turn the crank on this calculation, a remarkable result emerges: the uncertainty in the phase, $\Delta\phi$, scales as $1/N$. This is the "Heisenberg Limit," a fundamental ceiling on precision that is dramatically better than the $1/\sqrt{N}$ scaling of any classical strategy. Here, we see the propagation of uncertainty formula not as a tool for tracking our own clumsiness, but as a lens through which we can perceive the ultimate limits imposed by the laws of nature itself.