
Stationary Process

Key Takeaways
  • A process is strictly stationary if its full probability distribution is time-invariant, while weak stationarity requires only a constant mean, variance, and lag-dependent autocovariance.
  • The autocovariance function captures the internal correlation structure of a stationary process, depending only on the time lag between two points.
  • Many non-stationary processes, like random walks, can be transformed into stationary ones through methods like differencing, enabling stable modeling and forecasting.
  • Stationarity is a key assumption for modeling systems in statistical equilibrium across diverse fields, including engineering, economics, signal processing, and biology.

Introduction

In the study of systems that evolve randomly over time, known as stochastic processes, a fundamental challenge arises: how can we model and predict behavior that is inherently uncertain? Some processes change their fundamental character as time progresses, while others exhibit a form of statistical consistency. The concept of stationarity provides the theoretical foundation for identifying and analyzing this consistency. It allows us to distinguish between processes in a state of statistical equilibrium and those that are constantly evolving, addressing the critical knowledge gap between unpredictable fluctuation and stable, modelable behavior.

This article serves as a comprehensive guide to this cornerstone of time series analysis. We will first explore the formal definitions and properties in the Principles and Mechanisms chapter, untangling the crucial difference between the rigorous ideal of strict stationarity and the practical power of weak stationarity. We will examine the key tools for characterizing these processes, such as the autocovariance and autocorrelation functions. Following this, the Applications and Interdisciplinary Connections chapter will demonstrate how this abstract concept becomes a vital tool, enabling the design of stable control systems, the forecasting of economic trends, the analysis of biological evolution, and the discovery of order within chaos. By the end, you will understand not just what a stationary process is, but why it is one of the most powerful ideas for making sense of a random world.

Principles and Mechanisms

Imagine you are standing by a wide, flowing river. At some points, the river is a raging torrent, carving new paths through the landscape. At others, it settles into a calm, steady flow, its character seemingly unchanged day after day. A stochastic process is much like this river; some are wild and unpredictable, their very nature evolving with time, while others have settled into a kind of statistical equilibrium. The concept of stationarity is our tool for describing this equilibrium. It tells us that while the individual values of the process are random and unpredictable, the underlying rules that govern their behavior are constant.

But as with many profound ideas in science, there's more than one way to look at it. We have two primary flavors of stationarity: one that is absolute and uncompromising, and another that is more practical and forgiving.

The Illusion of Sameness: Strict vs. Weak Stationarity

Let's start with the most demanding definition. We call a process strictly stationary if its entire statistical personality is invariant to shifts in time. What does this mean? Imagine taking a snapshot of the process at a set of time points, say $(X_{t_1}, X_{t_2}, \dots, X_{t_n})$, and examining their joint probability distribution. Now, shift all your time points by some amount $\tau$ and look at the new set $(X_{t_1+\tau}, X_{t_2+\tau}, \dots, X_{t_n+\tau})$. If the joint probability distribution of this new set is identical to the first one, for any choice of time points and any shift $\tau$, then the process is strictly stationary. It's like a perfectly repeating wallpaper pattern; no matter how you slide it, the design looks the same.
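Written out, this requirement says that every finite-dimensional joint distribution function is unchanged by a shift of the time axis:

$$F_{X_{t_1}, \dots, X_{t_n}}(x_1, \dots, x_n) = F_{X_{t_1+\tau}, \dots, X_{t_n+\tau}}(x_1, \dots, x_n) \quad \text{for all } n, \text{ all } t_1, \dots, t_n, \text{ and all shifts } \tau.$$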

A wonderfully simple, yet illustrative, example is a process where we flip a coin once at the beginning of time and set a value $A$ to be $+1$ if it's heads and $-1$ if it's tails. The process is then defined as $X_t = A$ for all time $t$. At any point in time, $X_t$ is either $+1$ or $-1$ with equal probability. If you look at the process at times $t_1 = 10$ and $t_2 = 50$, the pair $(X_{10}, X_{50})$ is just $(A, A)$. If you look at it later at $t_1 + h = 100$ and $t_2 + h = 140$, the pair $(X_{100}, X_{140})$ is still just $(A, A)$. The underlying probability law—the coin flip—is completely independent of when you choose to observe the outcome. This process is, therefore, strictly stationary, regardless of the specific distribution of the initial random variable $A$.

This strict form of stationarity is incredibly robust. If you take a strictly stationary process $X_t$ and pass it through any fixed, memoryless filter—for instance, by squaring it to create a new process $Y_t = X_t^2$—the resulting process $Y_t$ is also guaranteed to be strictly stationary. The time-invariance of the underlying probability law is so complete that it survives such transformations.

However, verifying strict stationarity requires knowing the full joint probability distribution for all time, a god-like perspective we rarely have for real-world data. This leads us to a more practical, and ultimately more useful, definition: weak stationarity.

A Practical Checklist: The Three Pillars of Weak Stationarity

Instead of demanding that the entire probability distribution remains constant, weak (or covariance) stationarity asks only that the most important summary statistics—the first two moments—are stable over time. This gives us a checklist of three conditions:

  1. The mean must be constant: $E[X_t] = \mu$ for all $t$. The process should not have a systematic trend upwards or downwards. Its central tendency is fixed.

  2. The variance must be constant and finite: $\text{Var}(X_t) = \sigma^2 < \infty$ for all $t$. The average spread or volatility of the process around its mean does not change over time. The "finite" part is crucial. Consider a process where each value is drawn independently from a distribution with an infinite variance, like a Student's t-distribution with two degrees of freedom. Even though the mean is constant (zero), the process is not weakly stationary because its variance is undefined or infinite. Such a process is prone to extreme, unpredictable jumps that a finite-variance process would not exhibit.

  3. The autocovariance must depend only on the lag: $\text{Cov}(X_t, X_{t+h})$ must be a function of the lag $h$ only, not the time $t$. We call this function the autocovariance function, $\gamma(h)$.

This third condition is the most subtle and powerful. It says that the relationship between two points in the process depends only on how far apart they are in time, not when they occur. The covariance between today's temperature and tomorrow's should be the same as the covariance between the temperature on July 1, 2050, and July 2, 2050.

It's easy to be fooled here. Imagine a process defined by $X_t = A \cos(\omega t)$, where the amplitude $A$ is a random variable that is $+1$ or $-1$ with equal probability. The mean of this process is $E[X_t] = E[A]\cos(\omega t) = 0 \times \cos(\omega t) = 0$, which is constant. It passes the first test! But what about its covariance? The variance (which is the autocovariance at lag 0) is $\text{Var}(X_t) = E[X_t^2] - (E[X_t])^2 = E[A^2 \cos^2(\omega t)] = 1 \times \cos^2(\omega t)$. This variance clearly depends on time $t$, oscillating up and down. Since the variance isn't constant, the process is not weakly stationary, even though its mean is perfectly stable. This teaches us that all three pillars are necessary; a failure in one brings the whole structure down.
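A quick simulation makes the failure visible. Here is a minimal sketch in Python with NumPy (the frequency $\omega = 0.5$ and the ensemble size are arbitrary illustrative choices): it draws many realizations of $X_t = A\cos(\omega t)$ and estimates the mean and variance across realizations at each time point.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 0.5            # illustrative angular frequency
t = np.arange(50)      # time points to inspect
n_paths = 100_000      # independent realizations

# Each realization flips A = +1 or -1 once, then follows A*cos(omega*t).
A = rng.choice([-1.0, 1.0], size=(n_paths, 1))
X = A * np.cos(omega * t)                  # shape (n_paths, len(t))

print(np.abs(X.mean(axis=0)).max())        # ~0 at every t: constant mean
print(np.allclose(X.var(axis=0),           # variance tracks cos^2(omega*t),
                  np.cos(omega * t) ** 2,  # so it is NOT constant in time
                  atol=0.01))              # -> True
```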

The Fingerprint of Time: Autocorrelation and Covariance Structure

The autocovariance function, $\gamma(h)$, is like a fingerprint of a stationary process. It tells us about the process's internal "memory"—how a value at one point in time is related to values at other times.

To make it even more interpretable, we often normalize the autocovariance to get the autocorrelation function (ACF), denoted by $\rho(h)$. The relationship is simple and intuitive: you just divide the autocovariance by the variance of the process:

$$\rho(h) = \frac{\gamma(h)}{\gamma(0)}$$

Since $\gamma(0) = \text{Var}(X_t)$, the ACF at lag 0 is always $\rho(0) = \gamma(0)/\gamma(0) = 1$. For other lags, the ACF measures the correlation between points separated by $h$ time steps.
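As a sketch of how one estimates this in practice (Python/NumPy; the helper name sample_acf and the white-noise test series are my own illustration), the sample ACF is just the normalized sample autocovariance:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation rho(h) = gamma(h) / gamma(0) for h = 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariance (divide by n), the usual convention:
    gamma = np.array([np.dot(x[: n - h], x[h:]) / n for h in range(max_lag + 1)])
    return gamma / gamma[0]   # rho(0) = 1 by construction

rng = np.random.default_rng(1)
print(sample_acf(rng.standard_normal(10_000), 5))
# ~[1, 0, 0, 0, 0, 0]: white noise has no memory beyond lag 0
```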

These functions are not arbitrary; the definition of weak stationarity imposes strict rules on them.

  • Evenness: The covariance between $X_t$ and $X_{t+h}$ must be the same as between $X_{t+h}$ and $X_t$. This means $\gamma(h) = \gamma(-h)$, so the autocovariance and autocorrelation functions must be even functions of the lag $h$. A function like $10\cos(h) - 5\sin(h)$ could never be a valid autocovariance function because the sine term makes it asymmetric.
  • Boundedness: By the Cauchy-Schwarz inequality, the absolute value of the covariance can never exceed the variance. This translates to $|\gamma(h)| \le \gamma(0)$, or equivalently, $|\rho(h)| \le 1$ for all $h$. An ACF cannot have values like 1.5 or -2. A proposed function like $\rho(h) = 1 - 0.2h^2$ is immediately disqualified because for a large enough lag, its value will fly past $-1$.

When we consider a sequence of observations from a weakly stationary process, say $(X_1, X_2, X_3, X_4)$, these properties give the covariance matrix a beautiful and highly structured form. The entry in the $i$-th row and $j$-th column is $\text{Cov}(X_i, X_j) = \gamma(|i-j|)$. Because this depends only on the difference $|i-j|$, all the elements along any given diagonal are identical. This special type of matrix is called a Toeplitz matrix. For a process with autocovariance $\gamma(0) = 2.5$, $\gamma(1) = 1$, and $\gamma(k) = 0$ for $k \ge 2$, the covariance matrix for $(X_1, X_2, X_3, X_4)$ has this elegant structure:

$$\Sigma = \begin{pmatrix} \gamma(0) & \gamma(1) & \gamma(2) & \gamma(3) \\ \gamma(1) & \gamma(0) & \gamma(1) & \gamma(2) \\ \gamma(2) & \gamma(1) & \gamma(0) & \gamma(1) \\ \gamma(3) & \gamma(2) & \gamma(1) & \gamma(0) \end{pmatrix} = \begin{pmatrix} 2.5 & 1 & 0 & 0 \\ 1 & 2.5 & 1 & 0 \\ 0 & 1 & 2.5 & 1 \\ 0 & 0 & 1 & 2.5 \end{pmatrix}$$

This structure is a direct visual manifestation of stationarity.
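In code, this structure falls out of a single call. A minimal sketch with NumPy and SciPy, using the $\gamma$ values from the example above:

```python
import numpy as np
from scipy.linalg import toeplitz

gamma = np.array([2.5, 1.0, 0.0, 0.0])  # gamma(0), gamma(1), gamma(2), gamma(3)
Sigma = toeplitz(gamma)                 # symmetric Toeplitz covariance matrix
print(Sigma)
# [[2.5 1.  0.  0. ]
#  [1.  2.5 1.  0. ]
#  [0.  1.  2.5 1. ]
#  [0.  0.  1.  2.5]]
```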

When the Rules Bend: Weak Without Being Strict

Now for a fascinating question: strict stationarity (with finite variance) implies weak stationarity, but does the reverse hold? Can a process be weakly stationary without being strictly stationary?

The answer is a resounding yes, and it reveals the true difference between the two concepts. Consider a process constructed as follows: draw a sequence of independent standard normal random variables, $\{\epsilon_t\}$. If the time $t$ is even, we set $X_t = \epsilon_t$. If $t$ is odd, we set $X_t = \frac{1}{\sqrt{2}}(\epsilon_{t-1}^2 - 1)$.

Let's check the weak stationarity conditions. One can calculate that the mean is always zero, and the variance is always one, regardless of whether $t$ is even or odd. Furthermore, the autocovariance turns out to depend only on the lag $h$ (in fact, it's zero for all non-zero lags). So, this process is perfectly weakly stationary.

But is it strictly stationary? Absolutely not. At even times, $X_t$ is a standard normal variable, symmetric and defined on the entire real line. At odd times, however, $X_t$ is related to a squared normal variable, which means it can never be less than $-\frac{1}{\sqrt{2}}$. The very shape and support of the probability distribution of $X_t$ changes depending on whether the time is even or odd. Since the one-dimensional distribution isn't constant, the joint distributions can't be either. This process is like a person who always has the same average mood (mean) and the same range of emotions (variance), but on Mondays speaks only in prose and on Tuesdays speaks only in poetry. The underlying patterns are different, even if the summary statistics are the same.
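A simulation makes the contrast concrete. This sketch (Python/NumPy; the sample size is arbitrary) checks that the first two moments match at even and odd times while the supports of the distributions do not:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
eps = rng.standard_normal(n + 1)
t = np.arange(1, n + 1)

X = np.where(t % 2 == 0,
             eps[t],                              # even t: standard normal
             (eps[t - 1] ** 2 - 1) / np.sqrt(2))  # odd t: shifted, scaled chi-square

even, odd = X[t % 2 == 0], X[t % 2 == 1]
print(even.mean(), odd.mean())   # both ~0: constant mean
print(even.var(), odd.var())     # both ~1: constant variance
print(even.min(), odd.min())     # even times dip far below -1; odd times
                                 # never go below -1/sqrt(2) ~ -0.707
```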

Engineering Stability: Transformations and Stationarity

Why do we care so much about stationarity? Because it represents a state of statistical predictability that makes modeling and forecasting possible. Many real-world processes, like stock prices or population levels, are clearly not stationary—they exhibit trends.

The wonderful thing is that sometimes we can perform a simple operation to induce stationarity. A common technique is differencing. If we have a weakly stationary process $X_t$, we can create a new process $Y_t = X_t - X_{t-1}$. Is this new "differenced" process stationary? Let's check. The new mean is $E[Y_t] = E[X_t] - E[X_{t-1}] = \mu - \mu = 0$. The new autocovariance can be expanded in terms of the autocovariance of $X_t$: a little algebra gives $\text{Cov}(Y_t, Y_{t-h}) = 2\gamma(h) - \gamma(h-1) - \gamma(h+1)$, which depends only on the lag $h$. Thus, differencing a weakly stationary process always yields another weakly stationary process.
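To see this check numerically, take $X_t$ to be white noise around a mean $\mu$, so $\gamma(0) = \sigma^2$ and $\gamma(h) = 0$ otherwise; the expansion above then predicts $\text{Var}(Y_t) = 2\sigma^2$ and a lag-1 autocorrelation of $-0.5$. A minimal sketch (Python/NumPy; $\mu = 5$ and $\sigma = 1$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = 5.0 + rng.standard_normal(100_000)   # white noise around mu = 5
y = np.diff(x)                           # Y_t = X_t - X_{t-1}

print(round(y.mean(), 3))                # ~0.0: the means mu - mu cancel
print(round(y.var(), 3))                 # ~2.0 = 2*gamma(0) - 2*gamma(1)
yc = y - y.mean()
rho1 = np.dot(yc[:-1], yc[1:]) / np.dot(yc, yc)
print(round(rho1, 3))                    # ~ -0.5, exactly as the expansion predicts
```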

This is more than a mathematical curiosity; it's a powerful tool. Many non-stationary processes with trends, when differenced, become stationary. This act of transformation is often the first and most critical step in taming a wild, evolving process, making it amenable to analysis and revealing the stable, underlying structure hidden within the fluctuations. It is a prime example of how understanding the deep principles of stationarity allows us not just to describe the world, but to reshape our view of it.

Applications and Interdisciplinary Connections

After our journey through the formal landscape of stationary processes, you might be left with a feeling of abstract elegance. But what is this concept for? Where does this mathematical machinery connect with the tangible world of flapping drone wings, fluctuating stock prices, and the very code of life? The truth is, stationarity is not just a mathematician's curiosity; it is a fundamental principle that allows us to model and understand a staggering variety of systems where randomness and time intersect. It is the concept that allows us to find statistical certainty in the heart of uncertainty.

The core intuition is this: a stationary process describes a system in a state of statistical equilibrium. It has "forgotten" its beginning. Whatever shocks or pushes it received in the distant past have faded away, leaving it to fluctuate in a consistent, predictable manner. Its statistical heartbeat—its mean, its variance, its rhythm of correlation—remains steady. Let’s see where this simple, powerful idea takes us.

Engineering Stability: Taming the Randomness

Imagine you are an engineer designing the control system for a micro-drone. The drone is constantly battered by tiny, random gusts of wind. Its angular position, let's call it $\theta_t$ at time $t$, wobbles. Your controller's job is to nudge it back to level flight. A simple model for this behavior might look something like this: the deviation at the next moment, $\theta_t$, is some fraction $c$ of the current deviation $\theta_{t-1}$, plus the new random gust $Z_t$. This gives us the relation $\theta_t = c\,\theta_{t-1} + Z_t$.

For the drone's flight to be "stable," its wobbles must not grow out of control. The statistical properties of its deviation should be constant over time; in other words, the process $\{\theta_t\}$ must be stationary. When does this happen? The crucial insight comes from looking at the feedback parameter, $c$. If $|c| \ge 1$, any deviation is either maintained or amplified over time. A small gust of wind has an effect that persists or even grows, and the drone's variance would explode. The system has an infinite memory of past shocks. But if $|c| < 1$, each deviation is dampened. The system is self-correcting; it gradually "forgets" past disturbances. In this regime, the process becomes weakly stationary, fluctuating with a constant variance, $\sigma_Z^2 / (1 - c^2)$, set by the feedback strength $c$ and the magnitude $\sigma_Z^2$ of the random gusts. The drone is stable.
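A short simulation shows both regimes side by side (Python/NumPy; the values $c = 0.9$ and $c = 1.0$ are illustrative choices, not from the text):

```python
import numpy as np

def simulate(c, n=50_000, seed=4):
    """theta_t = c*theta_{t-1} + Z_t with standard normal gusts Z_t."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n)
    for t in range(1, n):
        theta[t] = c * theta[t - 1] + rng.standard_normal()
    return theta

stable = simulate(c=0.9)   # |c| < 1: weakly stationary
walk = simulate(c=1.0)     # |c| = 1: a random walk, not stationary

h = len(stable) // 2       # compare first and second halves
print(stable[:h].var(), stable[h:].var())  # both ~ 1/(1 - 0.81) ~ 5.26
print(walk[:h].var(), walk[h:].var())      # spread keeps growing with time
```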

This very same logic applies far beyond engineering. An economist modeling daily changes in a commodity's price might use a similar autoregressive model, perhaps one that depends on the previous two days: $X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + Z_t$. Just like with the drone, the stability of the market model—whether price shocks die out or lead to explosive bubbles—depends entirely on whether the coefficients $\phi_1$ and $\phi_2$ correspond to a stationary process. By analyzing the mathematical properties of stationarity, economists can diagnose the stability of their models and, by extension, the markets they represent. The same mathematics that keeps a drone level helps us understand the turbulence of financial systems.
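The standard diagnostic is that the roots of the characteristic polynomial $1 - \phi_1 z - \phi_2 z^2$ must lie outside the unit circle. A minimal sketch (Python/NumPy; the coefficient values are invented for illustration):

```python
import numpy as np

def ar2_is_stationary(phi1, phi2):
    """True iff all roots of 1 - phi1*z - phi2*z^2 lie outside the unit circle."""
    # np.roots expects coefficients from the highest degree down.
    roots = np.roots([-phi2, -phi1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0))

print(ar2_is_stationary(0.5, 0.3))   # True: shocks die out
print(ar2_is_stationary(0.9, 0.3))   # False: phi1 + phi2 >= 1, explosive
```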

Manufacturing Stationarity: Finding the Signal in the Noise

Some of the most interesting processes in nature are not stationary. Think of a simple random walk—the path of a diffusing particle or the trajectory of a gambler's fortune. Its position wanders, and its variance grows linearly with time. It never settles down; it is fundamentally non-stationary. How can we apply our toolkit to such a system?

The trick is to realize that while the random walk itself is non-stationary, the steps it takes often are. If each step is an independent, identically distributed (i.i.d.) random variable, then the sequence of steps is a textbook example of a strict-sense stationary process. We can recover this underlying stationary "engine" from the non-stationary process through a simple operation: differencing. If we look at the change in position from one moment to the next, $X_t - X_{t-1}$, we are looking at the steps themselves.

This idea can be generalized. We can take a non-stationary process and, through transformations like differencing or sampling, manufacture a new process that is stationary. For example, if we take our random walk $X_t$ and create a new series by looking at the total displacement every two steps, $Y_k = X_{2k} - X_{2k-2}$, we find something remarkable. Each $Y_k$ is the sum of two consecutive (and independent) steps. Since these pairs of steps are non-overlapping, the sequence $\{Y_k\}$ turns out to be a sequence of i.i.d. variables, making it strictly stationary. This concept of transforming a non-stationary series into a stationary one is the cornerstone of modern time series analysis, forming the basis for powerful forecasting models used in fields from climatology to finance.
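A minimal illustration (Python/NumPy; the step distribution and sample size are arbitrary): the walk's variance grows with time, while the one-step differences and the non-overlapping two-step displacements both behave like stationary i.i.d. sequences.

```python
import numpy as np

rng = np.random.default_rng(5)
steps = rng.standard_normal(100_000)   # i.i.d. steps: strictly stationary
X = np.cumsum(steps)                   # the random walk itself: not stationary

h = len(X) // 2
print(X[:h].var(), X[h:].var())        # variance grows with time

D = np.diff(X)                         # one-step differences recover the steps
Y = X[2::2] - X[:-2:2]                 # Y_k = X_{2k} - X_{2k-2}: sums of
                                       # non-overlapping step pairs, i.i.d.
print(D.var(), Y.var())                # ~1 and ~2, stable over time
```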

The Symphony of Frequencies: Stationarity in Signal Processing

So far, we have talked about stationarity in the domain of time—how correlations decay as moments become more separated. But an equally powerful perspective comes from the world of frequencies. The Wiener-Khinchin theorem tells us that for a stationary process, the autocovariance function (a measure of correlation in time) and the power spectral density (a measure of power distribution across frequencies) are Fourier transform pairs. A process that is stable in time has a well-defined and time-invariant "symphony" of constituent frequencies.

This connection allows us to understand filtering. When we pass a signal through a linear time-invariant filter—say, a low-pass filter that removes high-frequency hiss—we are essentially sculpting its power spectrum. If the input process is weakly stationary, a time-invariant filter will produce an output that is also weakly stationary. The filter doesn't care when a frequency component arrives; it treats all time points equally. It simply reshapes the balance of power among the frequencies, which results in a new, but still stationary, process with a new autocovariance function.
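A sketch of this in code (Python with NumPy/SciPy; the fourth-order Butterworth low-pass is an arbitrary choice of LTI filter): white noise goes in, a stationary but spectrally reshaped process comes out.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
x = rng.standard_normal(100_000)        # weakly stationary input (white noise)

b, a = signal.butter(4, 0.2)            # LTI low-pass filter (arbitrary design)
y = signal.lfilter(b, a, x)             # output: still weakly stationary

h = len(y) // 2
print(y[:h].var(), y[h:].var())         # same variance in both halves

f, Pxx = signal.welch(x, nperseg=1024)  # input spectrum: roughly flat
f, Pyy = signal.welch(y, nperseg=1024)  # output spectrum: high band suppressed
print(Pyy[f > 0.3].mean() / Pyy[f < 0.1].mean())  # << 1: power reshaped
```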

An even more beautiful phenomenon arises in communications. Imagine we take a stationary signal $X_t$ (like a voice recording) and modulate it for radio transmission by multiplying it by a high-frequency carrier wave, $\cos(\omega_0 t)$. The resulting signal, $X_t \cos(\omega_0 t)$, is clearly non-stationary; its power oscillates with the deterministic carrier wave. But in a real radio system, the receiver doesn't know the exact phase of the carrier wave. Let's model this uncertainty by including a random phase $\Phi$, uniformly distributed on $[0, 2\pi]$ and independent of the signal. Our signal is now $Y_t = X_t \cos(\omega_0 t + \Phi)$. A miracle happens. When we compute the statistical properties of $Y_t$, we average over all possible values of this random phase. This act of averaging completely washes out the time-dependence introduced by the cosine term. The resulting process $Y_t$ is, perhaps surprisingly, weakly stationary! The introduction of a specific kind of randomness has restored the statistical equilibrium.
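An ensemble simulation makes the "miracle" visible (Python/NumPy; the carrier frequency and white-noise signal model are illustrative): without the random phase, the variance at each time tracks $\cos^2(\omega_0 t)$; with it, the variance is flat.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, t = 100_000, np.arange(12)
w0 = 0.7                                        # illustrative carrier frequency

X = rng.standard_normal((n_paths, len(t)))      # stationary (white) signal
fixed = X * np.cos(w0 * t)                      # deterministic carrier phase
phi = rng.uniform(0, 2 * np.pi, (n_paths, 1))   # one random phase per realization
randomized = X * np.cos(w0 * t + phi)

print(fixed.var(axis=0).round(2))       # oscillates like cos^2(w0*t): non-stationary
print(randomized.var(axis=0).round(2))  # ~0.5 at every t: equilibrium restored
```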

The Stationary State of Being: From Physics to Biology

The idea of a statistical equilibrium is central to many fields of science, and stationarity is its formal language. Consider a system described by a time-homogeneous Markov process—one where the transition probabilities depend only on the time elapsed, not the absolute time. Examples are everywhere. The number of ions bound to a channel in a cell membrane can be modeled as a queue where ligands arrive and depart with constant rates. A particle hopping between vertices on a graph can be a discrete-time Markov chain.

Many such systems, if they are ergodic, will eventually settle into a "stationary distribution"—a state where the probability of being in any given configuration is constant over time. Now, here is the profound connection: if you start the system in its stationary distribution, the entire stochastic process that describes its future evolution is strict-sense stationary. Because it begins in a state of perfect statistical balance, it remains in that state forever. Every statistical property—not just the mean and variance, but the full joint distribution across any set of time points—becomes invariant to shifts in time.
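For a finite-state chain, this stationary distribution is the left eigenvector of the transition matrix with eigenvalue 1. A small sketch (Python/NumPy; the 3-state transition matrix is invented for illustration):

```python
import numpy as np

# Hypothetical 3-state chain: P[i, j] = Pr(next = j | current = i).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# The stationary distribution pi solves pi @ P = pi:
# it is the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print(pi)                        # equilibrium occupation probabilities
print(np.allclose(pi @ P, pi))   # True: started in balance, it stays in balance
```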

This principle has deep implications. In evolutionary biology, models of nucleotide substitution are often assumed to be stationary. This means that for a given lineage, the underlying biochemical machinery of mutation is assumed to have constant statistical properties through the ages. This is a baseline assumption. Upon it, a stronger, more famous hypothesis can be built: the strict molecular clock. The clock hypothesis posits not only that each lineage's evolutionary process is stationary, but also that the rate of evolution is the same across all lineages. Stationarity does not imply a molecular clock, but the clock requires stationarity as a precondition. This distinction is vital for formulating precise, testable hypotheses about the history of life, such as using a "relative rate test" to check if two species are evolving at the same speed.

The Ghost in the Machine: Stationarity in Chaos

Perhaps the most astonishing application of stationarity lies at the boundary between the random and the determined. Consider a chaotic system, like the simple map $X_t = (k X_{t-1}) \bmod 1$ for some integer $k \ge 2$. Given an initial value $X_0$, the entire future sequence is perfectly determined. There is no external randomness. Yet, the system is chaotic: infinitesimally small differences in the starting point lead to exponentially diverging trajectories.

From the perspective of ergodic theory, such systems often possess an "invariant measure"—a probability distribution that, if used to select the initial state $X_0$, is perfectly preserved by the system's evolution. For our map, this invariant measure is the uniform distribution on $[0, 1)$. Now, what if we embrace our ignorance and model the starting point $X_0$ as a random variable drawn from this distribution? We have created a stochastic process from a deterministic machine. And the result? This process, born from chaos, is strict-sense stationary. The system's deterministic, unpredictable dance, when viewed through the lens of this special probability measure, is statistically indistinguishable from a truly random process in equilibrium. The temporal invariance of the joint probability distributions is a direct consequence of the spatial invariance of the measure under the system's dynamics.
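A quick numerical check (Python/NumPy; $k = 2$ and the ensemble size are arbitrary choices): start an ensemble of initial points from the uniform distribution on $[0, 1)$, iterate the deterministic map, and watch the marginal distribution stay uniform at every step.

```python
import numpy as np

rng = np.random.default_rng(8)
k = 2
x = rng.uniform(0.0, 1.0, size=500_000)   # X_0 drawn from the invariant measure

for step in range(1, 6):
    x = (k * x) % 1.0                     # deterministic chaotic update
    counts, _ = np.histogram(x, bins=10, range=(0.0, 1.0))
    print(step, (counts / len(x)).round(3))   # every bin ~0.1: still uniform
```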

Here we see the true unifying power of the idea. Stationarity provides a bridge between the world of stochastic processes, driven by explicit randomness, and the world of deterministic chaos, where unpredictability emerges from complex, nonlinear rules. It reveals that the statistical regularities we can count on, whether in the stability of a drone, the frequencies of a radio signal, or the emergent randomness of a chaotic system, all spring from the same fundamental principle: a system in statistical harmony with the flow of time.