Internal State Variables

Key Takeaways
  • Internal state variables are quantifiable properties within a system that compress its entire past history, enabling the prediction of its future behavior.
  • In materials science, ISVs are essential for modeling history-dependent phenomena like plastic deformation, fatigue damage, and work hardening.
  • The evolution of internal state variables is governed by physical laws, primarily the Second Law of Thermodynamics, which ensures that models are physically realistic.
  • The concept is broadly interdisciplinary, explaining memory in systems ranging from digital circuits and non-Newtonian fluids to biological processes like epigenetics.

Introduction

Why does a bent paperclip stay bent, while a stretched rubber band snaps back? Why does pressing a remote button once turn the TV on, but pressing it again turns it off? The answer to these seemingly unrelated questions lies in a single, powerful concept: memory. Many systems, from simple electronics to complex living organisms, have outputs that depend not just on the present input, but on their entire past history. This article addresses the fundamental challenge of how to scientifically capture and model this history. It introduces the concept of ​​internal state variables​​—the hidden quantities that act as a system's memory. In the following sections, we will first explore the core principles and mechanisms of these variables, understanding how they compress history and evolve according to the laws of physics. Subsequently, we will embark on a journey across various scientific fields to witness the remarkable and unifying power of internal state variables in action, from materials science and electronics to computational modeling and biology.

Principles and Mechanisms

Imagine a simple light switch on the wall. Flick it up, the light is on. Flick it down, the light is off. The switch’s action depends only on its current position. It has no memory of how many times it's been flicked before. Now, think about the power button on your TV remote. Press it once, the TV turns on. Press it again, the TV turns off. The button’s effect depends on something you can't see: the current state of the TV. Is it on or off? The TV system has a memory. This simple distinction is the gateway to understanding one of the most powerful and unifying concepts in science and engineering: the ​​internal state variable​​.

A system like the light switch, whose output depends only on its current input, is called ​​combinational​​. In contrast, a system with memory, like the TV, is called ​​sequential​​. The "memory" isn't some vague, mystical quality. It is held in physical, quantifiable properties of the system which we call internal state variables. These variables live inside the "black box" of the system, summarizing its entire past history into a compact, usable form.

The Essence of Memory: Compressing History

What exactly is an internal state? Think of a system that is not at "initial rest"—that is, a system with some stored energy or information before you even touch it. A stretched rubber band, a charged capacitor, or a computer's RAM all contain a non-zero internal state. These states are the variables that capture everything we need to know about the system's past to predict its future.

A beautiful illustration comes from the world of digital signal processing, in the design of filters that clean up signals like audio or images. A ​​Finite Impulse Response (FIR)​​ filter is like a person with a short-term memory; its current output depends on a fixed number of recent inputs. It's straightforward. But an ​​Infinite Impulse Response (IIR)​​ filter is different. Its output depends not only on past inputs but also on its own past outputs. It's a recursive, feedback loop.

You might think that to calculate the output of such a filter, you'd need to know its entire infinite history. But here lies the magic: you don't. The influence of the entire past is perfectly compressed into a small, finite number of internal state variables—typically, just the last few input and output values. At each step, the filter uses this compact state to calculate the new output and update its state for the next step. The internal state variables are the system's essential memory, the distilled essence of its history.
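This compression is easy to see in code. Below is a minimal sketch (the coefficients are chosen only for illustration): a first-order IIR filter whose dependence on the entire infinite past is carried in a single internal state variable, the previous output.

```python
# A first-order IIR low-pass filter: y[n] = b*x[n] + a*y[n-1].
# Its infinite memory of past inputs is compressed into ONE internal
# state variable: the previous output. Coefficients are illustrative.

class FirstOrderIIR:
    def __init__(self, a=0.9, b=0.1):
        self.a, self.b = a, b
        self.state = 0.0          # internal state: last output y[n-1]

    def step(self, x):
        y = self.b * x + self.a * self.state
        self.state = y            # update the memory for the next step
        return y

filt = FirstOrderIIR()
# Feed a constant input; the output relaxes toward it. Each new output
# needs only the current state, never the full input history.
outputs = [filt.step(1.0) for _ in range(5)]
```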

Why We Need Them: When the Present Isn't Enough

In many real-world systems, the past is not just prologue; it dictates the present. Consider a sheet of metal. According to a simple, memoryless model, it will fail if the stress applied to it exceeds its intrinsic strength. But as any engineer knows, a metal component can fail from ​​fatigue​​—repeatedly applying a small stress, well below the static strength, can eventually cause it to break.

A model that only looks at the current stress, $\sigma$, is blind to this history. To capture fatigue, we must introduce an internal state variable, let's call it "damage" and denote it by $D$. With each stress cycle, $D$ grows a little. Our failure condition is no longer about stress being less than strength, but about damage being less than a critical value. The variable $D$ remembers the cumulative effect of all past loading cycles, providing the system with the memory it needs to predict a fatigue failure.
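One of the simplest evolution laws for $D$ is Miner's linear damage rule: each cycle at stress amplitude $\sigma_a$ adds $1/N_f(\sigma_a)$ to the damage, where $N_f$ is the number of cycles to failure from the material's S-N curve. The sketch below uses a purely illustrative power-law S-N curve, not data for any real metal.

```python
# Linear damage accumulation (Miner's rule): a minimal sketch.
# The S-N relation N_f = (sigma_f / sigma_a)**m is an illustrative
# power law; sigma_f and m are placeholder constants.

def cycles_to_failure(stress_amplitude, sigma_f=500.0, m=5.0):
    """Illustrative S-N curve: cycles to failure at a given amplitude."""
    return (sigma_f / stress_amplitude) ** m

def accumulate_damage(stress_history):
    """Each cycle adds 1/N_f to D; failure is predicted when D >= 1."""
    D = 0.0
    for sigma_a in stress_history:
        D += 1.0 / cycles_to_failure(sigma_a)
        if D >= 1.0:
            return D, True
    return D, False

# 2000 cycles at half the reference strength: each individual cycle is
# harmless, but D accumulates and failure is predicted at cycle 32.
D, failed = accumulate_damage([250.0] * 2000)
```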

This idea extends deep into the science of materials. When you bend a paperclip, it becomes harder to bend further. This phenomenon, known as ​​work hardening​​, is another form of material memory. The current shape of the paperclip doesn't tell the whole story. To truly know its state, we must look at its microstructure. The history of bending creates a tangled web of microscopic defects called dislocations. A physically grounded model will use the density of these dislocations, $\rho$, as an internal state variable. The value of $\rho$ tells us how "hardened" the material is, a crucial piece of information that the history of deformation has imprinted onto the material's internal state.
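Models of this kind typically pair a Kocks-Mecking-type evolution law for $\rho$ (dislocation storage minus recovery) with the Taylor relation, under which the flow stress grows with $\sqrt{\rho}$. A sketch with illustrative, uncalibrated constants:

```python
import math

# Work hardening with dislocation density rho as the ISV: a sketch
# using a Kocks-Mecking-type evolution law and the Taylor relation.
# All constants (k1, k2, alpha, G, b) are illustrative placeholders.

def harden(rho0, dgamma, steps, k1=1e8, k2=10.0):
    """Explicit integration of d(rho)/d(gamma) = k1*sqrt(rho) - k2*rho."""
    rho = rho0
    for _ in range(steps):
        rho += (k1 * math.sqrt(rho) - k2 * rho) * dgamma
    return rho

def flow_stress(rho, alpha=0.3, G=80e9, b=2.5e-10):
    """Taylor relation: flow stress scales with sqrt(dislocation density)."""
    return alpha * G * b * math.sqrt(rho)

# 5% of accumulated plastic shear raises rho, and with it the stress
# needed to deform the material any further: the memory of the bending.
rho = harden(rho0=1e10, dgamma=1e-4, steps=500)
```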

The Rules of the Game: Evolution and Instability

Internal state variables are not static; they evolve. The rules governing this evolution are called ​​evolution laws​​ or ​​state equations​​. In digital electronics, these rules can be written down in a simple ​​transition table​​. This table is a recipe: given the system's current internal state (e.g., the values of binary variables $y_1$ and $y_2$) and its current external input (e.g., $x$), the table tells you what the next internal state will be.

When an input changes, the system embarks on a journey. Its internal variables begin to change, following the recipe in the transition table, until they (hopefully) reach a new configuration where they no longer need to change—a ​​stable state​​.

But what if they don't? The dynamics of state variables can be surprisingly rich and complex. Instead of settling down, a system can enter a cycle, oscillating endlessly between two or more unstable states. Even more dramatically, if multiple state variables are instructed to change at once, they engage in a ​​race condition​​. Who gets there first? Tiny, uncontrollable variations in physical properties, like the propagation delay of signals through different wires, determine the winner. Sometimes, the final stable state is the same regardless of who wins; this is a non-critical race. But in a ​​critical race​​, the final state of the system is fundamentally unpredictable—it depends on the outcome of this microscopic competition. This isn't just a theoretical curiosity; it's a major hazard in the design of high-speed asynchronous circuits, a tangible consequence of the dynamics of internal states.
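The mechanics of "following the recipe" can be sketched directly. The toy transition table below is invented for illustration: for input $x = 1$ the state $(0, 0)$ is unstable, and the system travels $(0,0) \to (0,1) \to (1,1)$ before settling.

```python
# A toy asynchronous circuit as a transition table: given the current
# internal state (y1, y2) and input x, the table gives the next state.
# The entries are invented for illustration, not a real circuit.

TABLE = {
    # (y1, y2, x): (next_y1, next_y2)
    (0, 0, 0): (0, 0),   # stable when x = 0
    (0, 0, 1): (0, 1),   # unstable: the journey begins
    (0, 1, 0): (0, 0),
    (0, 1, 1): (1, 1),
    (1, 0, 0): (0, 0),
    (1, 0, 1): (1, 1),
    (1, 1, 0): (1, 0),
    (1, 1, 1): (1, 1),   # stable when x = 1
}

def settle(state, x, max_steps=10):
    """Follow the table until a stable state, or give up (oscillation)."""
    for _ in range(max_steps):
        nxt = TABLE[(state[0], state[1], x)]
        if nxt == state:
            return nxt, True
        state = nxt
    return state, False   # never settled: the system is cycling

final, stable = settle((0, 0), x=1)   # journey: 00 -> 01 -> 11 (stable)
```

A critical race would correspond to two variables changing in one step, with physical delays (not the table) deciding which change lands first.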

The Ultimate Referee: The Second Law of Thermodynamics

The evolution laws for internal state variables cannot be arbitrary. They must conform to the fundamental laws of physics. The most important of these is the ​​Second Law of Thermodynamics​​.

Many processes involving changes in internal state—like the plastic deformation of a metal or the flow of a thick liquid—are ​​dissipative​​. They generate heat and increase the entropy of the universe. The second law, in the form of the Clausius-Duhem inequality, demands that the rate of this dissipation can never be negative. A system cannot spontaneously cool down and create order out of chaos.

This principle acts as a powerful constraint. When we propose a mathematical model for a material with internal variables, we must prove that its evolution laws will never, under any circumstances, violate the second law. This ensures our model is physically realistic. The entire framework is often built around a thermodynamic potential, like the ​​Helmholtz free energy​​ $\psi$, which depends on observable variables (like strain $\boldsymbol{\varepsilon}$) and the internal state variables (like damage $D$ or dislocation density $\rho$). The evolution laws for the ISVs are then derived in a way that guarantees the thermodynamic consistency of the whole system. Physics, not just mathematics, dictates the rules of the game.
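Under the usual isothermal, small-strain assumptions, the logic can be sketched as follows (a schematic derivation using the symbols already introduced, not the general case):

```latex
% Clausius-Duhem inequality (isothermal form): the stress power not
% stored as free energy must be dissipated, never created.
\dot{\mathcal{D}} \;=\; \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}
  \;-\; \dot{\psi} \;\ge\; 0 .
% For a free energy \psi(\boldsymbol{\varepsilon}, D, \rho), the chain rule gives
\dot{\psi} \;=\; \frac{\partial \psi}{\partial \boldsymbol{\varepsilon}} : \dot{\boldsymbol{\varepsilon}}
  \;+\; \frac{\partial \psi}{\partial D}\,\dot{D}
  \;+\; \frac{\partial \psi}{\partial \rho}\,\dot{\rho} ,
% and with the standard identification
% \boldsymbol{\sigma} = \partial\psi/\partial\boldsymbol{\varepsilon},
% the constraint on any proposed evolution laws reduces to
-\,\frac{\partial \psi}{\partial D}\,\dot{D}
  \;-\; \frac{\partial \psi}{\partial \rho}\,\dot{\rho} \;\ge\; 0 .
```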

The Art of Choosing States

If we need to model a system with memory, how do we know which internal variables to choose? Or even how many? This is where science becomes an art, guided by physics and validated by experiment.

Sometimes, a simple choice is not enough. In modeling the complex deformation of a metal crystal, just tracking the total amount of slip on each crystal plane turns out to be insufficient. This scalar value throws away crucial information about the history of rotations and the path of deformation. A richer description, using the full ​​plastic deformation gradient tensor​​ $\mathbf{F}_p$ as an internal variable, is required to faithfully capture the material's memory under complex loading.

In other cases, experiments can reveal the hidden complexity of a system's internal world. Consider the ​​glass transition​​, the fascinating process where a liquid cools into a solid-like glass without crystallizing. We can model this by assuming the state of "structural disorder" is captured by a single internal variable. This simple model makes a firm prediction: a specific combination of measurable quantities, known as the ​​Prigogine-Defay ratio​​ $\Pi$, must equal 1. However, for most real glasses, experiments show that $\Pi$ is closer to 2 or 3. This discrepancy is profound. Nature is telling us that our one-variable model is too simple. The complex process of structural arrest as a liquid turns to glass requires at least two or three distinct internal processes, each with its own state variable, to be described correctly.
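For reference, the ratio combines the jumps (written $\Delta$, liquid value minus glass value) of three measurable quantities across the glass transition at temperature $T_g$:

```latex
% Prigogine-Defay ratio: jumps in isobaric heat capacity C_p,
% isothermal compressibility \kappa_T, and thermal expansion
% coefficient \alpha_p, measured across the glass transition.
\Pi \;=\; \frac{\Delta C_p \,\Delta \kappa_T}{T_g\, V\, (\Delta \alpha_p)^2}
% A single internal order parameter forces \Pi = 1; measured values
% of 2-3 point to several independent internal relaxation processes.
```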

From the flip-flops in a computer to the hardening of steel and the formation of glass, the concept of internal state variables provides a unified language to describe how systems remember their past. They are the hidden gears of the universe, ticking away according to rules dictated by the fundamental laws of nature, imprinting the arrow of time onto the very fabric of the world around us.

Applications and Interdisciplinary Connections

Having understood the principles of internal state variables—that they are the hidden gears of a system, the keepers of its memory—we can now embark on a journey to see where this powerful idea takes us. You might be surprised. The same conceptual tool that explains why a bent paperclip stays bent also sheds light on how a single fertilized egg can develop into a human being. This is the beauty of a fundamental scientific principle: it shows up everywhere, unifying seemingly disparate parts of our world. Let us take a tour of this vast landscape.

The World of Materials That Remember

Our most immediate, tangible experience with memory is in the solid materials around us. You stretch a rubber band, and it remembers its original shape, snapping back. You bend a metal spoon, and it remembers its new shape, staying bent. The difference lies in their internal states.

Consider a piece of metal. Its pristine state is a neat, orderly crystal lattice. When we apply a force, it deforms. If the force is small, the atoms are just slightly displaced from their equilibrium positions; this is elastic deformation. Remove the force, and they spring back. But if the force is large enough, something more dramatic happens. Planes of atoms begin to slip past one another, a process mediated by microscopic defects called dislocations. These dislocations move, multiply, and get tangled up. This microscopic rearrangement is irreversible. The material has acquired a permanent set. It has undergone ​​plastic deformation​​.

We cannot possibly track the position of every atom and every dislocation. It's a hopeless task. So, we do what a good physicist does: we invent a summary variable. We introduce an internal state variable called the ​​plastic strain tensor​​, $\boldsymbol{\varepsilon}^p$. This variable doesn't care about the details of individual dislocations; it just captures their net effect—the permanent, history-dependent part of the deformation. To predict the metal's future behavior, we need to know its current total strain, yes, but we also must know its current plastic strain, which is the memory of all the bending and stretching it has ever endured.

This idea can be refined. Some materials exhibit a more nuanced memory. If you stretch a metal bar, it becomes harder to stretch further in that same direction. But surprisingly, it becomes easier to compress in that direction! This is called the Bauschinger effect. The material doesn't just remember that it was deformed; it remembers the direction of deformation. To capture this, we need a more sophisticated internal variable: a ​​back-stress​​, $\boldsymbol{\alpha}$. You can think of this as a hidden, internal stress field, created by the history of plastic flow, that either opposes or assists the next deformation.
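The Bauschinger effect falls out naturally once the back-stress enters the yield condition. A one-dimensional sketch with linear kinematic hardening (all material constants are illustrative): after a tensile excursion the back-stress is positive, so yielding in the reverse direction begins at a smaller compressive stress.

```python
# 1D plasticity with two ISVs: plastic strain eps_p and back-stress
# alpha (linear kinematic hardening), via the classic return-mapping
# update. E, H, sigma_y are illustrative constants (MPa units).

def update(eps, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    """Given total strain and the ISVs, return (stress, eps_p, alpha)."""
    sigma_trial = E * (eps - eps_p)          # assume the step is elastic
    f = abs(sigma_trial - alpha) - sigma_y   # yield check vs. back-stress
    if f <= 0.0:
        return sigma_trial, eps_p, alpha     # elastic: ISVs unchanged
    dgamma = f / (E + H)                     # plastic multiplier
    sign = 1.0 if sigma_trial - alpha > 0 else -1.0
    eps_p += dgamma * sign                   # permanent set grows
    alpha += H * dgamma * sign               # memory of the *direction*
    return E * (eps - eps_p), eps_p, alpha

# Pull into the plastic range: alpha becomes positive, so reverse
# yielding will start at stress alpha - sigma_y, not at -sigma_y.
s, ep, a = update(0.002, 0.0, 0.0)
```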

Memory isn't always about shape. Sometimes, it's about integrity. A material subjected to repeated loading cycles, even below its plastic limit, can begin to weaken. Microcracks and voids start to form and grow, coalescing until the material eventually fails. This process is called fatigue, or more generally, ​​continuum damage​​. Again, tracking every single microcrack is impossible. So, we define a scalar internal variable, the damage $D$, which ranges from $0$ for a pristine material to $1$ for a completely failed one. As the material is loaded and unloaded, $D$ accumulates, representing the irreversible degradation of its internal structure. This variable allows us to model how a material's stiffness and strength degrade over its service life, a concept absolutely critical for designing safe bridges, aircraft, and medical implants.
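A minimal sketch of how $D$ feeds back into the observable response, with a deliberately toy growth rule: the damaged material answers the same strain with less stress, because only the undamaged fraction of its cross-section still carries load.

```python
# A scalar damage variable D degrades the effective stiffness.
# The modulus E0 and the damage growth rule are illustrative,
# not a calibrated model of any material.

def effective_modulus(E0, D):
    """Damaged stiffness: the load-bearing fraction shrinks to (1 - D)."""
    assert 0.0 <= D < 1.0
    return (1.0 - D) * E0

def grow_damage(D, n_cycles, dD_per_cycle=1e-4):
    """Toy evolution law: D gains a fixed increment per load cycle."""
    return min(1.0, D + dD_per_cycle * n_cycles)

E0 = 70e9                     # pristine Young's modulus (Pa)
D = grow_damage(0.0, 3000)    # service history: 3000 load cycles
# The component is now measurably softer than when it was new.
```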

Memory in Fluids and Porous Media

The idea of a state that "remembers" is not confined to solids. Consider stirring a pot of honey versus a pot of paint. The honey's resistance to stirring (its viscosity) is simple—it depends only on the current speed of your spoon. But the paint is different. Stir it fast, and it seems to get thinner. Stop stirring, and it thickens again. The paint has memory.

Such fluids are called ​​non-Newtonian​​. They are often composed of long, chain-like polymer molecules. At rest, these chains are coiled up in a random tangle. When the fluid is sheared, they begin to uncoil and align with the flow, making it easier for the layers to slide past one another. This alignment is not instantaneous; it takes time. The degree of alignment of the polymer network can be described by an internal state variable, $\xi$. This variable evolves according to its own dynamics, typically relaxing toward an equilibrium value over a characteristic time, $\tau$. The fluid's observable properties, like its viscosity, depend on this internal state $\xi$, which in turn depends on the history of shearing it has experienced.
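This can be sketched with a simple relaxation law, $\dot{\xi} = (\xi_{eq} - \xi)/\tau$, where the equilibrium alignment depends on the shear rate. All functional forms and constants below are illustrative, chosen only to show the lag:

```python
# A shear-thinning fluid with an alignment ISV xi in [0, 1]:
# xi relaxes toward an equilibrium set by the shear rate, with time
# constant tau, and the viscosity depends on xi.

def xi_equilibrium(shear_rate, gamma0=1.0):
    """More shear -> more chain alignment (saturating toward 1)."""
    return shear_rate / (shear_rate + gamma0)

def evolve_xi(xi, shear_rate, dt, tau=2.0):
    """First-order relaxation: d(xi)/dt = (xi_eq - xi) / tau."""
    return xi + dt * (xi_equilibrium(shear_rate) - xi) / tau

def viscosity(xi, mu_thick=10.0, mu_thin=1.0):
    """Aligned chains (xi near 1) slide easily: lower viscosity."""
    return mu_thick - (mu_thick - mu_thin) * xi

# Start stirring a rested fluid: the paint thins over seconds, not
# instantly, because xi needs time to approach its equilibrium.
xi, dt = 0.0, 0.01
for _ in range(1000):          # 10 s of stirring at shear rate 5
    xi = evolve_xi(xi, 5.0, dt)
```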

An even more subtle form of memory appears in the natural world under our feet. The ground is a porous medium, a sponge of solid particles and empty spaces. When it rains, water fills these spaces; during a dry spell, the water drains or evaporates. You might think that the amount of water the soil can hold is a simple function of the suction pressure pulling the water out. But it's not. For the same suction pressure, a soil that is in the process of drying holds more water than the same soil in the process of wetting. This phenomenon is called ​​hysteresis​​.

The reason lies in the complex geometry of the pore spaces and the physics of surface tension. The way pores fill and empty depends on the size of the "throats" connecting them, and a pore might fill at a different pressure than it empties. The overall state of saturation, $S$, therefore depends on the history of wetting and drying reversals. To model this, we must introduce internal variables that store this history. This could be a list of the past "reversal points" in pressure, or it could be modeled with a more abstract mathematical machine like a ​​Preisach operator​​, which is essentially a collection of a vast number of simple hysteretic switches. This memory is crucial for predicting groundwater flow, contaminant transport, and agricultural water availability.
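The Preisach idea is easy to sketch: many two-threshold relays, each switching ON when the input rises above its upper threshold and OFF when it falls below its lower one. The thresholds and equal weights below are illustrative; the point is that the same input value yields different outputs depending on the history of reversals.

```python
# A minimal Preisach sketch: "saturation" as the fraction of simple
# hysteretic relays that are switched on. Each relay turns ON when the
# input u rises to alpha and OFF when u falls to beta (beta < alpha).

class Relay:
    def __init__(self, beta, alpha):
        self.beta, self.alpha = beta, alpha
        self.on = False                      # the relay's own memory

    def update(self, u):
        if u >= self.alpha:
            self.on = True
        elif u <= self.beta:
            self.on = False
        return self.on

# A grid of relays over the triangle beta < alpha (weights all equal):
relays = [Relay(beta=b / 10, alpha=a / 10)
          for a in range(1, 11) for b in range(0, a)]

def saturation(u):
    """Drive every relay with input u; output = fraction switched on."""
    return sum(r.update(u) for r in relays) / len(relays)

# Same input (0.5), different histories, different saturations:
saturation(1.0)               # wet the soil fully
wet_path = saturation(0.5)    # then dry it down to 0.5
saturation(0.0)               # dry it out completely
dry_path = saturation(0.5)    # then wet it back up to 0.5
```

As in the soil, the drying branch holds more "water" than the wetting branch at the same pressure.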

The World of Electronics: State as a Purpose

In all the examples so far, memory has been an inherent, and sometimes inconvenient, property of a physical system. We now turn to a domain where memory is not a side effect but the entire purpose of the design: digital electronics.

What is a computer's memory? It is, in essence, a vast, meticulously engineered collection of internal state variables. A single bit of memory is stored in a circuit called a flip-flop, whose output (a voltage representing a '0' or '1') depends not just on its current inputs, but on its past inputs. It has a state.

Consider designing a simple 2-bit counter that cycles through the sequence $00 \to 10 \to 01 \to 11 \to 00 \ldots$ on each rising edge of a clock signal. How many internal state variables does it need? The output itself has four states, but to distinguish between being in state $00$ and waiting for the clock to go high, versus being in state $00$ just after the clock has gone high, requires additional memory. In fact, to make the circuit work reliably, we need to define at least eight distinct internal states to navigate the full cycle of inputs and outputs. This implies that a minimum of $\lceil \log_{2}(8) \rceil = 3$ binary internal state variables are needed to build the counter's "brain" and allow it to remember where it is in the sequence. Here, the internal states are not a summary of microscopic chaos; they are the discrete, logical embodiment of information itself.
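The need for those extra states can be sketched directly: the circuit sees a raw clock *level*, not an abstract "edge" event, so it must remember the previous clock level alongside the count. The 2-bit count plus 1 remembered clock bit gives exactly the eight internal states counted above.

```python
# The counter as a state machine driven by a raw clock line. To react
# only to rising edges, it keeps an extra state variable: the clock
# level it last saw. Count (2 bits) x prev_clk (1 bit) = 8 states.

SEQUENCE = ["00", "10", "01", "11"]       # the article's output cycle

class Counter:
    def __init__(self):
        self.index = 0        # which output in the sequence (2 bits)
        self.prev_clk = 0     # remembered clock level (the 3rd bit)

    def step(self, clk):
        if clk == 1 and self.prev_clk == 0:   # rising edge only
            self.index = (self.index + 1) % 4
        self.prev_clk = clk
        return SEQUENCE[self.index]

c = Counter()
# Holding the clock high advances the count once, not continuously,
# precisely because prev_clk remembers that the edge already happened:
outputs = [c.step(clk) for clk in [0, 1, 1, 0, 1, 0, 1]]
```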

This perspective helps us understand problems in modern devices. ​​Perovskite solar cells​​ are a promising new technology, but they suffer from a bizarre problem: their measured current-voltage curve depends on the direction and speed of the measurement sweep. This is a form of hysteresis, and it complicates the evaluation of their true efficiency. The cause? The perovskite material contains mobile ions that are slow to respond to changes in the electric field. These migrating ions and the charges that get trapped at interfaces act as unwanted internal state variables, each with its own relaxation timescale, $\tau_i$ and $\tau_t$. When the voltage is swept quickly, these slow variables can't keep up with their equilibrium values, creating a lag that makes the measured current dependent on the scan's history. Understanding the solar cell as a system with internal state variables is the key to diagnosing and potentially fixing this instability.
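The lag mechanism is generic enough to sketch with a single slow variable. Everything below is illustrative (a toy relaxation law and a toy current expression), not a device model; the point is only that a fast sweep reads different currents at the same voltage depending on direction.

```python
# Scan-rate hysteresis from one slow internal variable xi (standing in
# for the ionic configuration): xi chases an equilibrium set by the
# voltage, and the measured current depends on both V and xi.

def sweep(voltages, dt, tau=1.0):
    """Sweep V; xi lags its equilibrium (here xi_eq = V) when dt << tau."""
    xi, currents = voltages[0], []
    for V in voltages:
        xi += dt * (V - xi) / tau          # slow first-order relaxation
        currents.append(V - 0.5 * xi)      # toy current: depends on xi
    return currents

forward = sweep([v / 100 for v in range(101)], dt=0.01)          # 0 -> 1 V
reverse = sweep([v / 100 for v in range(100, -1, -1)], dt=0.01)  # 1 -> 0 V
# At the same voltage (index 50 is V = 0.5 on both sweeps), the two
# scan directions report different currents: hysteresis from memory.
```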

The Computational Universe: Building Worlds with Memory

How do we take these rich physical theories and turn them into predictive tools for science and engineering? We build computational models. The concept of internal state variables is the fundamental bridge that allows us to do this.

The ​​Finite Element Method (FEM)​​ is the workhorse of modern engineering simulation. When an engineer wants to simulate a car crash or the stress on a turbine blade, they use FEM. The software breaks the object down into a mesh of tiny "elements." For each element, at each point of numerical integration (a "Gauss point"), the computer must solve the equations of the material's behavior. If the material is path-dependent—like the plastic metal or the damaged composite we discussed earlier—the program must store and update the set of internal state variables, $\boldsymbol{\alpha}_g$, at that specific point.

During a simulation, as the model deforms, the computer feeds the strain at each Gauss point into a local constitutive routine. This routine, using the old values of the internal variables as its memory, calculates the new stress and the new values of the internal variables. This local state information is then used to assemble the global picture of the object's response. This intricate algorithmic dance, repeated at millions of points and thousands of time steps, is how we simulate history-dependent behavior. Without the formal concept of ISVs, these simulations would be impossible.
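The bookkeeping pattern can be sketched in a few lines: one internal-state record per Gauss point, passed into a local constitutive routine and passed back out updated. The toy damage material below is illustrative; real codes store vectors of ISVs per point.

```python
# FEM-style ISV bookkeeping: one state record per Gauss point, updated
# by a local constitutive routine at every step. The toy material
# (damage remembers the largest strain ever seen) is illustrative.

def constitutive_update(strain, state, E=70e9, eps0=0.001):
    """Stress at one Gauss point, plus the UPDATED internal state."""
    state = {"max_strain": max(state["max_strain"], abs(strain))}
    D = min(0.9, max(0.0, (state["max_strain"] - eps0) / eps0))
    return (1.0 - D) * E * strain, state

gauss_states = [{"max_strain": 0.0} for _ in range(4)]  # one per point

def global_step(strains):
    """One pseudo-time step: visit every Gauss point in the mesh."""
    stresses = []
    for g, eps in enumerate(strains):
        sigma, gauss_states[g] = constitutive_update(eps, gauss_states[g])
        stresses.append(sigma)
    return stresses

first  = global_step([0.0015, 0.001, 0.001, 0.001])   # overload point 0
second = global_step([0.001, 0.001, 0.001, 0.001])    # uniform reload
# Point 0 now responds more softly than its neighbours: its stored
# state remembers the earlier overload even though the strain is equal.
```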

The idea can even be found at more abstract levels of computation. In some numerical techniques for solving Maxwell's equations, like the ​​Transmission Line Matrix (TLM) method​​, the algorithm itself is formulated using "internal auxiliary states." These are not necessarily direct representations of a physical memory, but rather computational constructs that make the algorithm work. Cleverly, these internal states can sometimes be algebraically "condensed" out of the final equations, leading to a more efficient algorithm that uses less computer memory, albeit sometimes at the cost of a slight reduction in accuracy at fine scales.

The Ultimate Complex System: Life Itself

We have traveled from metals to microchips. Our final stop is the most complex and fascinating system of all: the living organism.

How does a caterpillar know to become a butterfly? The process of ​​metamorphosis​​ is a magnificent, pre-programmed sequence. It is orchestrated by hormonal signals, but the organism must "know" its current developmental stage to respond correctly. We can create simple ​​Boolean network models​​ to capture this logic. In such a model, genes or entire developmental programs are represented as nodes that can be ON or OFF. The state of these nodes—a set of internal state variables—represents the organism's memory of its developmental progress. The presence of juvenile hormone might act as a gate, keeping the "adult" program switched OFF, even as molting signals arrive. Only when the juvenile hormone disappears can the molting signal finally flip the switch that initiates the pupal stage, an irreversible step stored in the network's state.
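The gating logic described above can be written as a tiny Boolean network. The wiring below is an invented caricature of the hormone interaction, not a model of any real insect's gene circuit:

```python
# A toy Boolean network for the metamorphosis gate: juvenile hormone
# (JH) blocks the pupal program, which latches ON once triggered.

def step(state, molt_signal):
    """One synchronous update of the network's Boolean ISVs."""
    return {
        "juvenile_hormone": state["juvenile_hormone"],  # set externally
        # Pupation fires only if a molt signal arrives while JH is
        # absent, and once ON it stays ON (an irreversible commitment):
        "pupal_program": state["pupal_program"]
                         or (molt_signal and not state["juvenile_hormone"]),
    }

state = {"juvenile_hormone": True, "pupal_program": False}
state = step(state, molt_signal=True)   # molt arrives with JH present:
blocked = state["pupal_program"]        # the pupal program stays OFF
state["juvenile_hormone"] = False       # JH disappears at the last instar
state = step(state, molt_signal=True)   # the same signal now commits
committed = state["pupal_program"]
```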

Perhaps the most profound application of the ISV concept in biology is in the field of ​​epigenetics​​. Every cell in your body—a neuron, a muscle cell, a skin cell—contains the exact same DNA sequence, the same genotype $g$. So what makes them different? The answer is cellular memory, encoded in a layer of information on top of the DNA.

The DNA in our cells is wrapped around proteins, and both the DNA and the proteins can be decorated with chemical tags. These patterns of tags, called epigenetic modifications, do not change the DNA sequence itself. Instead, they control which genes are accessible to be read and which are silenced. A neuron has the "neuron genes" switched ON and the "muscle genes" switched OFF. A muscle cell has the reverse.

These epigenetic patterns are the cell's internal state variables, $s(t)$. They are established during embryonic development in response to environmental cues, $e(t)$, and the underlying genotype, $g$. Crucially, they are stable and can be passed down through cell division. When a skin cell divides, it produces two new skin cells because it passes on its epigenetic memory. This is what allows a complex organism to maintain its structure. The final phenotype, $P(t)$, is therefore not just a function of genotype and environment, $f(g, e(t))$, but is mediated by this rich, dynamic layer of internal states: $P(t) = f(g, e(t), s(t))$.
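The relation $P = f(g, s)$ and the inheritance of $s$ can be put in cartoon form. Everything here is schematic; the gene names are invented labels, not real loci:

```python
# A cartoon of epigenetic memory: every cell carries the same genotype,
# but an internal state s marks which genes are readable, and that
# state, not just the DNA, is copied on division.

GENOTYPE = ("neuron_gene", "muscle_gene")   # identical g in every cell

class Cell:
    def __init__(self, epigenetic_state):
        self.s = dict(epigenetic_state)     # ISV: gene -> accessible?

    def phenotype(self):
        """P = f(g, s): only genes left accessible by s are expressed."""
        return {gene for gene in GENOTYPE if self.s.get(gene, False)}

    def divide(self):
        """Daughters inherit the epigenetic state along with the DNA."""
        return Cell(self.s)

neuron = Cell({"neuron_gene": True, "muscle_gene": False})
muscle = Cell({"neuron_gene": False, "muscle_gene": True})
daughter = neuron.divide()
# Same genotype, different internal state, different phenotype; and
# the daughter remembers what kind of cell it came from.
```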

From a bent paperclip to the identity of a living cell, the concept of an internal state variable—a hidden piece of information that carries the memory of the past—proves to be an astonishingly universal and powerful idea. It is a key that unlocks a deeper understanding of the complexity and unity of the world around us.