Non-Conservation of Energy

Key Takeaways
  • The apparent "non-conservation" of mechanical energy is its irreversible transformation into disordered thermal energy through mechanisms like friction and drag.
  • Non-conservative forces, which typically depend on velocity, drain mechanical energy from a system; damped, driven oscillators such as the Duffing equation make this dissipation mathematically explicit.
  • A deeper reason for non-conservation arises from a lack of time-translation symmetry, where a system's governing laws explicitly change with time, a principle explained by Noether's Theorem.
  • Energy dissipation is a crucial process across disciplines, driving phenomena from heat loss in electronics and turbulence in fluids to the evolution of stars and black holes.

Introduction

We are taught from a young age that energy is always conserved—a fundamental and unbreakable law of the universe. Yet, our daily experience seems to present a constant contradiction. A bouncing ball never returns to its starting height, a pendulum's swing inevitably ceases, and a hot cup of coffee always cools to room temperature. This apparent paradox raises a crucial question: if energy is never truly lost, where does it go? This article tackles this very question, revealing that what we perceive as energy "loss" is actually a profound and ubiquitous process of energy transformation. It's the story of how useful, ordered energy degrades into disordered, dissipated forms like heat.

In the chapters that follow, we will first explore the core "Principles and Mechanisms" behind this phenomenon, from the microscopic origins of friction to the deep connection between conservation laws and the symmetries of time itself. We will then journey through a diverse range of "Applications and Interdisciplinary Connections," discovering how this energy transformation governs everything from the efficiency of our electronics to the violent and beautiful evolution of stars and black holes.

Principles and Mechanisms

We all learn in school that energy is conserved. It’s a sacred law, a cornerstone of physics. And yet, our everyday experience seems to shout defiance at this principle. A bouncing ball, dropped from your hand, never quite returns to the same height. A pendulum, given a push, eventually stills. A satellite, grazing the upper atmosphere, will one day fall from the sky. If energy is so conserved, where does it all go?

The truth, as it so often is in physics, is both subtle and beautiful. The law isn't wrong, but our initial interpretation of it is often too narrow. What we are witnessing in these examples is not the destruction of energy, but its transformation. It is a metamorphosis from a useful, ordered form of mechanical energy—the coherent motion of the ball as a whole—into a useless, disordered form: the chaotic jiggling of countless individual atoms. This chapter is about the principles and mechanisms of this great conversion, the story of how and why mechanical energy is so often "lost."

The Great Conversion: From Order to Disorder

Let’s return to that bouncing ball. Imagine dropping a highly elastic ball in a vacuum onto a hard plate. It bounces, but to a lower height. Some of its initial potential energy is gone. Why? The collision is the culprit. As the ball hits the plate, it squashes. The macroscopic kinetic energy it had just before impact is converted into elastic potential energy, like compressing a spring. If the ball were a perfect spring, this process would be perfectly reversible. All the stored elastic energy would push back, converting itself back into kinetic energy, and the ball would rebound with the same speed it had on arrival.

But real materials are not perfect springs. They are complex assemblies of molecules. As the ball's material deforms and then relaxes, its long polymer chains or crystalline structures rub and slide against one another. This is a kind of internal friction. This microscopic rubbing generates heat, converting the ordered energy of the ball's motion into the disordered, random kinetic energy of its constituent atoms. By definition, this increase in microscopic random energy is an increase in the ball's internal thermal energy. The ball gets slightly warmer after the bounce.

This process is fundamentally irreversible. You cannot cool the ball by a fraction of a degree and expect that thermal energy to spontaneously organize itself into a coherent upward leap. The energy is still there, conserved in the universe as a whole, but it has been degraded from a useful, macroscopic form to a dissipated, microscopic one.
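To put rough numbers on this, here is a minimal sketch (the mass, drop height, and coefficient of restitution are assumed illustrative values, not data from any particular ball): with a coefficient of restitution e, each bounce returns only a fraction e² of the mechanical energy, and the rest becomes internal thermal energy.

```python
# Energy bookkeeping for a bouncing ball (all values assumed for illustration).
# Rebound speed is e times the impact speed, so kinetic energy (and hence
# rebound height) shrinks by a factor e**2 per bounce.

def bounce_heights(h0, e, n_bounces):
    """Return the rebound height after each of n_bounces bounces."""
    heights = []
    h = h0
    for _ in range(n_bounces):
        h *= e**2          # height scales with energy, which scales as e**2
        heights.append(h)
    return heights

m, g, h0, e = 0.1, 9.81, 1.0, 0.9      # assumed: a 100 g ball dropped from 1 m
heights = bounce_heights(h0, e, 5)
# Mechanical energy converted to internal thermal energy after 5 bounces:
lost = m * g * (h0 - heights[-1])
```

After five bounces the ball has given up roughly two-thirds of its initial potential energy to heat; no individual bounce destroys energy, yet the useful, macroscopic share steadily drains away.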

To get a better mental picture, physicists often use simple conceptual models. The Maxwell model for a viscoelastic material imagines it as a perfect spring and a perfect "dashpot" connected in series. A dashpot is like a syringe filled with thick oil; it resists motion. The spring represents the material's ability to store energy elastically and return it. The dashpot represents the material's viscous, fluid-like properties, the internal friction. When you deform the Maxwell model, the spring stores energy, but any movement of the dashpot does work against viscosity, dissipating that energy as heat. In a full cycle of stretching and relaxing, the spring gives back all the energy it took, but the energy lost in the dashpot is gone for good. That lost energy is what stops the ball from bouncing forever.
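We can watch the dashpot at work numerically. In this sketch (stiffness, viscosity, strain amplitude, and frequency are all assumed illustrative parameters), a Maxwell element is driven through one complete sinusoidal strain cycle; the spring ends the cycle exactly where it started, so any accumulated loss comes from the dashpot alone.

```python
import math

# Minimal sketch of the Maxwell model (all parameter values assumed): a spring
# of stiffness k in series with a dashpot of viscosity c, driven through one
# full strain cycle. The stress sigma obeys the constitutive law
#   d(sigma)/dt = k * d(eps)/dt - (k/c) * sigma,
# and the instantaneous power dissipated in the dashpot is sigma**2 / c.
def maxwell_cycle_loss(k=100.0, c=5.0, amp=0.01, omega=10.0, steps=100_000):
    dt = (2 * math.pi / omega) / steps     # one full cycle, finely resolved
    sigma, dissipated = 0.0, 0.0
    for i in range(steps):
        strain_rate = amp * omega * math.cos(omega * i * dt)
        sigma += (k * strain_rate - (k / c) * sigma) * dt   # constitutive law
        dissipated += (sigma**2 / c) * dt                   # heat in dashpot
    return dissipated

loss = maxwell_cycle_loss()   # strictly positive, even over a closed cycle
```

The strain returns to its starting value, yet `loss` comes out positive: the closed mechanical cycle has a one-way energetic leak, which is exactly why the real ball cannot bounce forever.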

The Mathematical Anatomy of Non-Conservation

Physics thrives on translating such pictures into precise mathematical language. If forces like gravity or an ideal spring force are "conservative," then what do we call these forces that cause dissipation? They are, fittingly, non-conservative forces. The most common signature of a dissipative force is that it depends on velocity. Think of air resistance; the faster you move, the harder it pushes back.

A beautiful and comprehensive picture of this is given by the Duffing equation, a famous equation that models a wide range of oscillators, from a swaying building to a complex electrical circuit. A common form of the equation is:

$$m\frac{d^2x}{dt^2} + \delta \frac{dx}{dt} + \alpha x + \beta x^3 = \gamma \cos(\omega t)$$

Let's dissect this equation, for it tells a complete story. On the left, we have four terms. $m\frac{d^2x}{dt^2}$ is Newton's familiar "mass times acceleration." The terms $\alpha x + \beta x^3$ represent the restoring force, the part that tries to pull the system back to equilibrium (like a spring, but a nonlinear one). These parts, inertia and restoring force, are the heart of the conservative system.

The other two terms are what make the energy non-conserved. The term $\delta \frac{dx}{dt}$ is the damping term. It's a force proportional to velocity ($\frac{dx}{dt}$), just like our simple model of friction. On the right, the term $\gamma \cos(\omega t)$ is the driving term, an external force that cyclically pushes and pulls on the system.

Now, let's look at the system's mechanical energy, $E = \text{Kinetic} + \text{Potential}$. If we calculate how this energy changes with time, the equation of motion gives us a wonderfully clear result:

$$\frac{dE}{dt} = -\delta \left(\frac{dx}{dt}\right)^2 + \gamma \frac{dx}{dt} \cos(\omega t)$$

Look at what this tells us! The damping term's contribution to the energy change is $-\delta \left(\frac{dx}{dt}\right)^2$. Since $\delta$ is positive and the velocity squared is always non-negative, this term is always less than or equal to zero. It relentlessly drains energy from the system, turning it into heat. This is the mathematical signature of dissipation. In contrast, the driving term's contribution can be positive or negative, depending on whether the driving force is pushing with the motion or against it. It is the energy source, pumping energy into the system. The fate of the oscillator, whether it winds down, blows up, or settles into a steady pattern, is determined by the tug-of-war between the energy-draining damper and the energy-supplying driver.
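This energy balance is something we can check directly. The sketch below (all parameter values are assumed, chosen only for illustration) integrates the Duffing equation with a fourth-order Runge–Kutta scheme and confirms that the change in mechanical energy matches the accumulated work of the damping and driving terms.

```python
import math

# Numerical check of the Duffing energy balance (parameters assumed):
# verify that E(final) - E(initial) equals the time integral of
#   dE/dt = -delta * v**2 + gamma * v * cos(omega * t).
m, delta, alpha, beta, gamma, omega = 1.0, 0.3, 1.0, 0.5, 0.2, 1.2

def accel(x, v, t):
    return (gamma*math.cos(omega*t) - delta*v - alpha*x - beta*x**3) / m

def energy(x, v):
    # kinetic + potential for the nonlinear restoring force alpha*x + beta*x**3
    return 0.5*m*v**2 + 0.5*alpha*x**2 + 0.25*beta*x**4

x, v, t, dt = 1.0, 0.0, 0.0, 1e-4
E0, work = energy(x, v), 0.0
for _ in range(100_000):                      # integrate to t = 10 with RK4
    k1x, k1v = v, accel(x, v, t)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = v + dt*k3v,     accel(x + dt*k3x,     v + dt*k3v,     t + dt)
    v_old = v
    x += dt*(k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt*(k1v + 2*k2v + 2*k3v + k4v) / 6
    t += dt
    vm = 0.5*(v_old + v)                      # midpoint velocity for the work sum
    work += (-delta*vm**2 + gamma*vm*math.cos(omega*(t - 0.5*dt))) * dt

energy_change = energy(x, v) - E0             # should agree with `work`
```

The agreement between `energy_change` and `work` to numerical precision is the bookkeeping promised by the equation: every joule the oscillator loses is accounted for by the damper and the driver.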

The exact form of the damping can vary. For a pendulum swinging in air, the drag force might be better modeled as proportional to the square of the velocity, $F_d \propto -v^2$. In other physical systems, it might even be proportional to the velocity cubed, $F_d \propto -v^3$. While the mathematical details of calculating the energy loss per cycle change, the core physical principle is identical: a velocity-dependent force does work on the system, and this work is what we call dissipation.

The Symmetry Connection: A Deeper Principle

So far, we've discussed non-conservation arising from explicit dissipative forces like friction and drag. But there is a deeper, more elegant reason why energy may not be conserved, one that is tied to the very fabric of time itself.

Consider a simple pendulum, but instead of hanging from a fixed pivot, imagine the pivot point is being forced to oscillate back and forth by some external machine. There is no air drag or internal friction in this idealized problem, yet the mechanical energy of the pendulum bob is not conserved. Why? Because the machine moving the pivot is constantly doing work on the pendulum, sometimes adding energy, sometimes removing it. The system is no longer isolated.

The powerful framework of Lagrangian mechanics gives us a profound insight into this. It tells us to look for symmetries. One of the most beautiful results in physics, Noether's Theorem, establishes a direct link between the symmetries of a system and its conservation laws. The symmetry relevant to energy conservation is time-translation invariance. This is a fancy way of asking: "If I run an experiment today, and then run the exact same experiment tomorrow, will I get the same result?" If the answer is yes, the laws governing the system are time-invariant, and energy is conserved.

In the case of the oscillating pivot, the system is not time-invariant. The position of the pivot explicitly depends on time, $x_s(t) = A \cos(\omega t)$. The "rules of the game" for the pendulum are changing from moment to moment. Because the system's definition has an explicit dependence on time, its energy is not conserved.
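This statement can be made precise in the Lagrangian framework. The short derivation below is standard textbook material, sketched here only to show exactly how explicit time dependence feeds energy in or out:

```latex
% Energy function for a Lagrangian L(q, \dot q, t):
H = \dot q\,\frac{\partial L}{\partial \dot q} - L .
% Differentiating H along a solution of the Euler--Lagrange equation
% \frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q}
% cancels every term except the explicit time derivative:
\frac{dH}{dt} = -\frac{\partial L}{\partial t}.
% If L has no explicit t-dependence (time-translation invariance), the
% right-hand side vanishes and H is conserved. An oscillating pivot
% x_s(t) = A\cos(\omega t) enters L through t, so \partial L/\partial t \neq 0
% and the energy drifts at exactly that rate.
```

Conservation of energy is thus not an axiom but a consequence: it holds precisely when the Lagrangian carries no clock of its own.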

This principle extends far beyond simple mechanics. In modern physics, which describes the universe in terms of fields, the same idea holds. Imagine a universe described by a field $\phi$ whose dynamics are governed by a potential $V$. If that potential has an explicit time dependence, say the "mass" of a particle can change over time, $V(\phi, t)$, then the total energy of this field is not conserved. In fact, the rate at which the total energy of the entire universe changes is given by a breathtakingly simple formula: it is the spatial integral of the rate at which the potential itself changes with time, $\frac{\partial V}{\partial t}$. If the fundamental laws are static, energy is conserved. If the laws themselves evolve, energy is not.

When One Law Fails, Others May Follow

The consequences of breaking a conservation law can cascade. Let's revisit our satellite grazing the upper atmosphere. We know the atmospheric drag is a non-conservative force, so the satellite loses mechanical energy. Its orbit will decay, and it will spiral inwards.

But there's more to the story. The gravitational force from the planet is a central force; it always points directly toward the center of the planet. For central forces, another quantity is conserved: angular momentum. It is the conservation of angular momentum that keeps a planet in a stable, elliptical orbit.

However, the drag force is not a central force. It always points in the direction exactly opposite to the satellite's velocity. This means the drag force can exert a torque on the satellite about the planet's center. A torque changes angular momentum. So, for the satellite experiencing atmospheric drag, both energy and angular momentum fail to be conserved.

This has a drastic effect on how we can analyze the problem. For pure central-force motion, physicists use a powerful trick called the effective potential. By using the fact that the angular momentum $L$ is constant, we can reduce a complex two-dimensional orbital problem to a simple one-dimensional problem of a particle moving in a fixed potential well. But this entire method hinges on $L$ being a constant number. As soon as drag is introduced, $L$ starts to change with time, and the entire elegant structure of the effective potential collapses. The problem can no longer be simplified. This is a stark reminder that the powerful tools of physics are often built upon a foundation of conservation laws. When that foundation is compromised, the tools may fail us.
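A small simulation makes the double failure visible. In the sketch below (dimensionless units with GM = 1; the drag coefficient is an assumed, deliberately small value), an orbiting body feels inverse-square gravity plus a drag force antiparallel to its velocity; gravity alone would conserve both E and L, and drag erodes both.

```python
import math

# Orbit with atmospheric-style drag (dimensionless sketch, values assumed).
GM, k_drag = 1.0, 1e-3

def deriv(s):
    x, y, vx, vy = s
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    ax = -GM*x/r**3 - k_drag*v*vx     # inverse-square gravity + quadratic drag
    ay = -GM*y/r**3 - k_drag*v*vy
    return (vx, vy, ax, ay)

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5*dt*ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + dt*ki for si, ki in zip(s, k3)))
    return tuple(si + dt*(a + 2*b + 2*c + d)/6
                 for si, (a, b, c, d) in zip(s, zip(k1, k2, k3, k4)))

def E_and_L(s):
    x, y, vx, vy = s
    E = 0.5*(vx**2 + vy**2) - GM/math.hypot(x, y)   # mechanical energy
    L = x*vy - y*vx                                 # angular momentum (z part)
    return E, L

s = (1.0, 0.0, 0.0, 1.0)          # would be a circular orbit without drag
E0, L0 = E_and_L(s)
for _ in range(20_000):           # roughly three orbital periods at dt = 1e-3
    s = rk4_step(s, 1e-3)
E1, L1 = E_and_L(s)               # both strictly below their initial values
```

With `k_drag` set to zero, both quantities would hold steady to the accuracy of the integrator; with drag on, E and L drift down together, and the effective-potential trick has nothing constant left to stand on.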

A Modern Coda: The Ghost in the Machine

Let's end our journey in the world of computational science. In a computer, we can create ideal virtual worlds. We can simulate a box of atoms with perfectly conservative forces, a system where the total energy should be absolutely constant. This is called a microcanonical (NVE) ensemble simulation, and it is a workhorse of computational chemistry and materials science.

But a strange thing often happens. A researcher runs their NVE simulation, designed to be perfectly energy-conserving, and finds that the total energy of the system is slowly, but systematically, drifting upwards or downwards over millions of time steps. Is this a new physical phenomenon? A violation of the laws of thermodynamics discovered in silicon?

The answer is no. It is a "ghost in the machine": a numerical artifact. The algorithms used to integrate the equations of motion work by taking tiny, discrete steps in time. These methods are approximations, and they can introduce tiny errors. An integration time step that is too large, or an algorithm for handling constraints on bond lengths that isn't sufficiently accurate, can introduce a systematic bias. This bias, though minuscule in any single step, accumulates over the long run, causing the total energy to drift away from its true, conserved value.
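The effect is easy to reproduce on the simplest conservative system of all. In this illustrative sketch (a unit-mass, unit-stiffness harmonic oscillator, chosen for clarity rather than taken from any real simulation), the same equations are integrated with two schemes: forward Euler pumps in a little energy every step, while velocity Verlet, the symplectic method behind most molecular-dynamics codes, keeps the energy bounded.

```python
# Energy drift as a pure numerical artifact (illustrative toy system).
# Exact physics: harmonic oscillator with m = k = 1, so E should be constant.

def energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2

def euler(steps, dt=0.01):
    """Forward Euler: multiplies the energy by (1 + dt**2) every step."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return energy(x, v)

def verlet(steps, dt=0.01):
    """Velocity Verlet (kick-drift-kick): symplectic, energy stays bounded."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        v -= 0.5 * dt * x            # half kick (force = -x)
        x += dt * v                  # drift
        v -= 0.5 * dt * x            # half kick
    return energy(x, v)

E0 = 0.5                             # exact conserved energy
drift_euler = euler(100_000) - E0    # grows without bound
drift_verlet = verlet(100_000) - E0  # remains tiny for all time
```

After 100,000 steps the Euler energy has exploded by orders of magnitude while the Verlet energy still sits within a hair of 0.5; same physics, different ghost.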

This provides a final, crucial lesson. A systematic drift in energy in a simulation that should be conservative is a red flag. It tells the scientist that their simulation is unphysical and the results are not reliable. Therefore, understanding the deep principles of energy conservation is not just about understanding the natural world; it is an essential tool for validating the virtual worlds we create to model it. It helps us distinguish the true physics of the system from the ghosts in our machines.

Applications and Interdisciplinary Connections

In our previous discussions, we explored the foundational principles of energy conservation, a cornerstone of physics. We treat it as a sacred law, and for a closed system, it is. But if you look around, you see a world that seems to delight in violating it. A bouncing ball eventually comes to rest. A hot cup of coffee cools down. A spinning top wobbles and falls. Is the law of energy conservation wrong? Of course not. The energy is not destroyed; it has merely been shuffled off, transformed into a less useful, more disorderly form, typically heat, or radiated away into the void.

The "non-conservation" of the useful energy within a system is not a failure of physics but one of its most fascinating and practical aspects. This is the realm of friction, dissipation, and irreversible processes. Understanding where the energy goes is just as important as knowing that, for the universe as a whole, it is never truly lost. This chapter is a journey through the myriad ways energy makes its escape, from the engineered world of our own making to the grand, cosmic theater of stars and black holes.

The Engineer's Struggle: Taming Unwanted Heat

If you have ever felt a power adapter or a laptop charger get warm, you have experienced energy non-conservation firsthand. The goal is to transfer electrical energy, but some of it inevitably leaks out as heat. This leakage is a constant battle for engineers, and its sources are subtle and varied.

Consider the heart of the electrical grid: the transformer. Its job is to change voltage levels with minimal loss. Inside every large transformer is a core made of a magnetic material. As alternating current (AC) flows, it magnetizes the core back and forth, fifty or sixty times per second. Each time the magnetic field reverses, the microscopic magnetic "domains" within the material must physically reorient themselves. This process isn't perfectly fluid; it has a kind of internal friction. Work must be done to flip these domains, and that work is dissipated as heat. This effect is called magnetic hysteresis. A material that is hard to magnetize and demagnetize (a "hard" magnetic material) has a wide hysteresis loop and wastes enormous amounts of energy. This is why engineers have developed "soft" magnetic materials, like special silicon-steel alloys, whose very narrow hysteresis loops minimize this loss and keep the transformer from overheating.

But hysteresis is not the only culprit. The changing magnetic field also induces small, swirling electrical currents within the core itself, so-called eddy currents. These currents flow through the resistive material of the core and generate heat, just like the element in a toaster. To combat this, engineers laminate the core, building it from thin sheets insulated from one another to break up the paths for these currents. More advanced analysis, crucial for high-frequency electronics, even separates the total energy loss into distinct components: the static hysteresis loss ($W_h$), the classical eddy-current loss, which is proportional to frequency ($W_c \propto f$), and a more mysterious "anomalous loss" often found to scale with the square root of frequency ($W_a \propto f^{1/2}$). By carefully measuring the total loss at different frequencies, engineers can solve for each contribution and gain deep insight into the material's behavior, allowing them to design ever more efficient devices.
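The separation procedure is, at heart, linear algebra. The sketch below (all coefficients and frequencies are fabricated, synthetic numbers, not measurements of any real alloy) models the per-cycle loss as W(f) = W_h + W_c·f + W_a·√f; "measuring" W at three frequencies gives three linear equations in the three unknown contributions.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))   # partial pivot
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):                           # back-substitute
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Assumed "true" coefficients, used only to fabricate the measurements:
Wh, Wc, Wa = 0.02, 1.5e-4, 3.0e-3
freqs = [50.0, 400.0, 1000.0]
meas = [Wh + Wc * f + Wa * math.sqrt(f) for f in freqs]

# Loss-separation: each row encodes W(f) = W_h*1 + W_c*f + W_a*sqrt(f).
A = [[1.0, f, math.sqrt(f)] for f in freqs]
Wh_fit, Wc_fit, Wa_fit = solve3(A, meas)
```

In practice engineers measure at many frequencies and fit by least squares, but the three-point version already shows why the differing frequency scalings (constant, f, √f) make the three loss channels separable at all.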

Sometimes, energy loss occurs even in seemingly "ideal" situations. Imagine an oscillating circuit made of a perfect inductor and a perfect capacitor, with energy sloshing back and forth between them forever. Now, at the exact moment the capacitor is fully charged, we connect a second, uncharged capacitor in parallel. The charge will rapidly redistribute itself between the two capacitors until their voltages are equal. But in this process, something remarkable happens: some of the total electrical energy vanishes. Where did it go? The rapid redistribution of charge requires a transient, high-current pulse. Even in wires with near-zero resistance, this pulse radiates away a tiny amount of electromagnetic energy or, more practically, dissipates heat due to the wires' residual resistance. Any sudden, irreversible rearrangement in a system exacts an energetic toll. Nature, it seems, charges a fee for such abrupt changes.
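The fee is easy to compute. With example values (the capacitances and initial voltage below are assumed for illustration), charge is conserved when the capacitors are joined, but the stored energy is not: for two equal capacitors, exactly half of it disappears, no matter how small the resistance.

```python
# The two-capacitor "paradox" by direct arithmetic (values assumed).
C1 = C2 = 1e-6                      # two equal 1 uF capacitors
V0 = 10.0                           # initial voltage on C1; C2 starts uncharged
Q = C1 * V0                         # total charge, which IS conserved

E_before = 0.5 * C1 * V0**2         # energy stored before the connection
V_final = Q / (C1 + C2)             # common voltage after charge redistributes
E_after = 0.5 * (C1 + C2) * V_final**2

fraction_lost = (E_before - E_after) / E_before   # = 1/2 for equal capacitors
```

Notably, the resistance of the connecting wire never appears in the result: a smaller resistance only makes the current pulse larger and briefer, and the same half of the energy is surrendered to heat and radiation.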

The Roar of the River and the Whisper of the Wind: Dissipation in Fluids

Energy dissipation is not confined to the solid state; it is the very soul of motion in fluids like water and air. While the smooth, idealized flow of a "perfect" fluid conserves mechanical energy, real fluids are messy, viscous, and turbulent.

Anyone who has seen water flowing over a dam or in a steep channel may have witnessed a hydraulic jump. This is a dramatic and beautiful phenomenon where a fast-moving, shallow stream of water abruptly slows down and becomes deep and turbulent. The water's initial high kinetic energy seems to partly disappear. The law of energy conservation (in its simple mechanical form, the Bernoulli equation) fails spectacularly across the jump. However, momentum is still conserved. By applying the law of conservation of momentum, we can predict the change in water depth perfectly. And with that, we can calculate precisely how much mechanical energy was "lost." It was converted into the chaotic, churning motion of turbulence within the jump, which, through the fluid's viscosity, ultimately cascades down to the molecular level and warms the water ever so slightly.
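For a rectangular channel the bookkeeping is classical. In the sketch below (the upstream depth and speed are assumed example numbers), momentum conservation fixes the downstream depth through the Bélanger relation, and the head loss, the mechanical energy surrendered to turbulence, then follows in closed form.

```python
import math

# Standard hydraulic-jump relations for a rectangular channel
# (upstream conditions assumed for illustration).
g = 9.81
h1, v1 = 0.2, 6.0                   # upstream depth (m) and speed (m/s)
Fr1 = v1 / math.sqrt(g * h1)        # upstream Froude number; > 1 means supercritical

# Belanger momentum relation gives the conjugate (downstream) depth:
h2 = 0.5 * h1 * (math.sqrt(1 + 8 * Fr1**2) - 1)
v2 = v1 * h1 / h2                   # continuity: same discharge per unit width

# Energy head lost across the jump, converted to turbulence and then heat:
head_loss = (h2 - h1)**3 / (4 * h1 * h2)
```

Comparing the Bernoulli heads on either side, h + v²/2g, confirms that the deficit equals `head_loss` exactly; the "missing" energy is precisely the share handed over to the churning roller of the jump.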

This cascade is the essence of turbulence. It is a hierarchy of dissipation. Large-scale motions, or eddies, are unstable and break down into smaller eddies. These smaller eddies, in turn, break into even smaller ones. This process continues until the eddies become so small that the fluid's viscosity, its internal friction, can effectively grab hold of them and smear out their kinetic energy into heat. The great mathematician Andrey Kolmogorov proposed that in well-developed turbulence there is a characteristic smallest length scale, now called the Kolmogorov length scale ($\eta$), where this final act of dissipation occurs. This scale is determined by just two quantities: the kinematic viscosity of the fluid ($\nu$) and, crucially, the rate at which energy is being supplied to the turbulence at the large scales and cascading down, known as the dissipation rate ($\epsilon$). The entire chaotic dance of a turbulent fluid, from a stormy sky to a churning river, is orchestrated by this constant draining of energy from ordered motion into disordered heat.
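Dimensional analysis alone fixes the scale: the only length that can be built from ν (m²/s) and ε (m²/s³) is η = (ν³/ε)^(1/4). A quick estimate with illustrative values (water-like viscosity and an assumed, modest dissipation rate):

```python
# Kolmogorov microscales from dimensional analysis (values assumed).
nu = 1e-6          # kinematic viscosity of water, m^2/s
eps = 1e-2         # assumed turbulent dissipation rate, W/kg (= m^2/s^3)

eta = (nu**3 / eps) ** 0.25      # Kolmogorov length scale, m
tau = (nu / eps) ** 0.5          # Kolmogorov time scale, s
u_eta = (nu * eps) ** 0.25       # Kolmogorov velocity scale, m/s
```

For these numbers the smallest eddies are about a tenth of a millimeter across and turn over in about ten milliseconds: the entire roar of a river is ultimately silenced at scales too small to see.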

The Cosmic Stage: Energy Loss in the Heavens

On the vast stage of the cosmos, energy dissipation and non-conservation sculpt the evolution of stars, power exotic celestial objects, and even govern the dance of black holes. Here, energy is often not just turned into heat but is lost by being radiated away into the universe.

A high-energy charged particle, like an electron from a cosmic ray, cannot travel through matter without losing energy. As it flies past atomic nuclei, the intense electric field of each nucleus deflects the electron, causing it to accelerate. And as we know from electromagnetism, any accelerating charge radiates. This "braking radiation," or bremsstrahlung, consists of photons (light) that carry energy away, causing the electron to slow down. For highly relativistic electrons, this process is incredibly efficient. A remarkable feature is that the fractional energy loss per distance traveled becomes nearly independent of the electron's energy. Whether it has an energy of 1 GeV or 10 GeV, it loses roughly the same fraction of its energy in a given length of material.
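This energy-independence is captured by the radiation length X₀: in the relativistic regime dE/dx = −E/X₀, so the energy falls exponentially with depth and the fraction lost in a given thickness does not care what the starting energy was. A sketch (the electron energies are assumed examples; X₀ ≈ 0.56 cm is the commonly quoted value for lead):

```python
import math

# Bremsstrahlung in the radiation-length picture: dE/dx = -E/X0, hence
# E(x) = E0 * exp(-x / X0). Example energies assumed; X0 ~ 0.56 cm (lead).
X0 = 0.56                                  # radiation length of lead, cm

def energy_after(E0, thickness_cm):
    return E0 * math.exp(-thickness_cm / X0)

# A 1 GeV and a 10 GeV electron crossing 1 cm of lead lose the same FRACTION:
frac_1  = 1 - energy_after(1.0, 1.0) / 1.0
frac_10 = 1 - energy_after(10.0, 1.0) / 10.0
```

Both electrons shed a bit over 80% of their energy in that single centimeter; only the absolute number of GeV radiated differs, not the proportion.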

Particles can also lose energy by interacting with light itself. The universe is filled with low-energy photons, most notably the Cosmic Microwave Background radiation left over from the Big Bang. A high-energy proton or electron moving through this sea of photons can collide with one of them. In the particle's reference frame, the low-energy photon is Doppler-shifted to a much higher energy. The scattering event then kicks this photon to an even higher energy in the lab frame. This process, known as inverse Compton scattering, transfers energy from the relativistic particle to the photon. The particle slows down, having lost energy, while a new high-energy gamma ray is created. This mechanism is a fundamental way that cosmic rays lose energy as they propagate through the galaxy and is a primary source of the high-energy gamma rays we observe from astrophysical objects like quasars.

Perhaps the most dramatic form of energy loss occurs in the final moments of a massive star's life. In the incredibly hot and dense stellar core, the energy density is so high that electron-photon interactions can spontaneously create pairs of neutrinos and antineutrinos ($e^- + \gamma \to e^- + \nu + \bar{\nu}$). Unlike photons, which are trapped in the opaque plasma, neutrinos interact so weakly with matter that they fly straight out of the core and escape the star, carrying energy with them. This process acts as a catastrophic energy leak. It robs the core of the thermal pressure that supports it against its own immense gravity, accelerating its collapse and triggering the final, spectacular explosion of a core-collapse supernova. Here, energy non-conservation isn't a nuisance; it's the trigger for one of the most violent events in the universe.

Finally, we come to the most profound and mind-bending example of dissipation, one that involves the very fabric of spacetime. According to the "membrane paradigm" of black hole physics, the event horizon (the point of no return) can be treated as a physical membrane with surprising properties, including electrical resistance and viscosity. Now, imagine a black hole in a binary system with a companion star. The star's gravitational pull creates tidal bulges on the black hole's horizon, much like the Moon creates tides on Earth. As the black hole rotates or the star orbits, these bulges are dragged across the horizon. The horizon's "viscosity" creates a kind of friction, which dissipates energy as heat. This dissipated energy is drawn directly from the orbital energy of the binary system. The result is that the orbit slowly decays, with the two objects spiraling closer together. This process, known as tidal heating, is a form of spacetime friction, a way for a system to lose mechanical energy through purely gravitational interactions.

From the warmth of a wire to the inspiral of a black hole, the story of non-conserved energy is the story of action, change, and evolution. It is the universe's tax on every process, the force that drives systems towards equilibrium, and the mechanism that paints the rich, dynamic, and irreversible world we see around us.