
Mott's Law of Variable-Range Hopping

SciencePedia
Key Takeaways
  • Mott's law describes electrical conductivity in disordered materials where electrons hop between localized states by optimizing between tunneling distance and energy cost.
  • The law predicts that conductivity depends on temperature as $\exp[-(T_0/T)^p]$, where the exponent $p = 1/(d+1)$ reveals the effective dimensionality of the electron's motion.
  • At very low temperatures, Coulomb repulsion between electrons modifies the theory, leading to the Efros-Shklovskii model with a universal exponent of $p = 1/2$, regardless of dimension.
  • The principle of variable-range hopping is broadly applicable, explaining transport in diverse systems like doped semiconductors, conducting plastics, and quasiparticles in the fractional quantum Hall effect.

Introduction

In the perfectly ordered world of a crystal, electrons either move freely, producing metallic conduction, or are blocked by a definite energy gap, producing an insulator. But what happens in the messy, disordered systems that are far more common in nature, from doped semiconductors to amorphous plastics? In these materials, electrons can become trapped in localized energy pockets, a phenomenon known as Anderson localization. This raises a fundamental question: how can an electric current flow at all if the electrons are stuck? The answer lies not in free-flowing movement, but in a subtle quantum leap known as hopping.

This article delves into the elegant theory of variable-range hopping, which resolves the puzzle of conduction in such disordered insulators. We will uncover the foundational principles governing an electron's choice of hop, a delicate balance between tunneling distance and energy cost. The first chapter, "Principles and Mechanisms," will guide you through the derivation of Sir Nevill Mott's celebrated law, revealing how conductivity measurements can uniquely probe the dimensionality of a system. We will also explore crucial extensions to the theory, accounting for crossovers and the profound effects of electron-electron interactions. Following this theoretical foundation, the second chapter, "Applications and Interdisciplinary Connections," will demonstrate the remarkable predictive power and universality of hopping physics, showcasing its role in technologies like silicon chips and flexible displays, and its relevance to frontier research in condensed matter physics.

Principles and Mechanisms

Imagine trying to navigate a vast, rocky landscape in the dark. You're stuck in a small hollow. A short leap to the next rock might be easy, but that rock could be perilously high. A distant rock might be at the same level as you, but the leap is too far to risk. This is the dilemma faced by an electron in a disordered, insulating material. Unlike in a perfect crystal, where electrons can sail smoothly through repeating atomic lattices (or are stopped by a large, uniform energy gap), a disordered material presents a rugged energy landscape. Here, electron wavefunctions are no longer spread out but are "stuck," or localized, in pockets of low potential energy. These materials are Anderson insulators, and for an electric current to flow, electrons must hop from one localized state to another.

But how does an electron choose its path? This question leads us to one of the most elegant concepts in the physics of disordered systems: variable-range hopping.

The Art of the Optimal Hop

At absolute zero temperature, our electron is truly trapped. It has no thermal energy to help it move. As we warm the material up, however, the crystal lattice begins to vibrate, creating quantized vibrations called phonons. An electron can absorb a phonon, gain a bit of energy, and use it to "hop" to a vacant state nearby.

The probability of such a hop is a game of two competing factors. First, there's the quantum-mechanical tunneling probability. Localized electron wavefunctions decay exponentially with distance, much like the sound from a bell fades as you walk away. For an electron to hop a distance $R$, its wavefunction must overlap with the destination state. This overlap decreases as $\exp(-2R/\xi)$, where $\xi$ is the localization length, a measure of how spread out the electron's wavefunction is. A smaller $\xi$ means a more tightly trapped electron.

Second, there is the energy cost. Hopping to a state with a higher energy $\Delta E$ requires absorbing a phonon of at least that energy. At a low temperature $T$, high-energy phonons are rare. The probability of finding the required phonon is governed by the Boltzmann factor, $\exp(-\Delta E / k_B T)$, where $k_B$ is the Boltzmann constant.

Combining these two factors, the total hopping probability is dominated by the term:

$$P_{hop} \propto \exp\left(-\frac{2R}{\xi} - \frac{\Delta E}{k_B T}\right)$$

Nature, in its relentless quest for efficiency, will favor the hop that maximizes this probability. This is equivalent to minimizing the exponent, which represents the total "difficulty" of the hop. The electron must find the perfect compromise: a hop that is not too far in distance, nor too costly in energy. This is not necessarily a hop to the nearest neighboring site; it might be a longer leap to a more energetically favorable location. This is why we call it variable-range hopping (VRH).

Unveiling Mott's Law

To find this "optimal hop," we need to understand the relationship between hopping distance $R$ and the likely energy cost $\Delta E$. Imagine you are the electron. The further you are willing to search (a larger radius $R$), the more localized states you survey. In a $d$-dimensional space, the "volume" you survey is proportional to $R^d$. If we assume a roughly constant density of available states per unit volume and unit energy, which we'll call $g_F$, then the total number of states within a distance $R$ and an energy window $\Delta E$ is proportional to $g_F R^d \Delta E$. The typical energy cost of the best available hop is found by setting this number to one, which gives us a crucial relationship:

$$\Delta E \approx \frac{1}{g_F R^d}$$

The energy cost decreases rapidly as the hopping range increases! Now we can write the hopping difficulty purely in terms of the distance $R$:

$$S(R) = \frac{2R}{\xi} + \frac{1}{g_F k_B T R^d}$$

Finding the optimal hop is now a straightforward calculus problem: we find the value of $R$ that minimizes this function. Setting $dS/dR = 0$ gives $R_{opt} = [d\,\xi / (2 g_F k_B T)]^{1/(d+1)}$, so the optimal hopping distance depends on temperature as $R_{opt} \propto T^{-1/(d+1)}$. As the temperature drops, the electron is willing to "look" further and further to find a hop with a tiny energy cost.
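This optimization is easy to check numerically. Below is a minimal sketch in dimensionless units (setting $\xi = 1$ and $g_F k_B = 1$, purely for illustration): a brute-force search for the minimum of $S(R)$ confirms that cooling by a factor of ten stretches the optimal hop by $10^{1/(d+1)}$.

```python
# Numerical check of the optimal-hop argument, in dimensionless units
# where xi = 1 and g_F * k_B = 1, so that S(R) = 2R + 1/(T * R^d).
def S(R, T, d):
    """Total hopping 'difficulty': tunneling term plus energy-cost term."""
    return 2.0 * R + 1.0 / (T * R ** d)

def R_opt(T, d, lo=1e-3, hi=1e3, n=200_000):
    """Brute-force search on a logarithmic grid for the R minimizing S."""
    step = (hi / lo) ** (1.0 / n)
    best_R, best_S, R = lo, S(lo, T, d), lo
    for _ in range(n):
        R *= step
        s = S(R, T, d)
        if s < best_S:
            best_R, best_S = R, s
    return best_R

d = 3                                  # three-dimensional hopping
r1, r2 = R_opt(0.01, d), R_opt(0.001, d)
# Mott predicts R_opt ~ T^(-1/(d+1)): a tenfold cooling should stretch
# the optimal hop by 10^(1/4), about 1.78.
print(r2 / r1)
```

Running the same search with $d = 1$ or $d = 2$ reproduces stretches of $10^{1/2}$ and $10^{1/3}$, mirroring the exponents in Mott's law.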

When we plug this optimal distance back into the expression for the difficulty $S$, we find that the minimum difficulty, $S_{min}$, is proportional to $T^{-1/(d+1)}$. Since the electrical conductivity, $\sigma$, is proportional to the hopping probability, $\exp(-S_{min})$, we arrive at the celebrated Mott's law:

$$\sigma(T) = \sigma_0 \exp\left[-\left(\frac{T_0}{T}\right)^p\right], \quad \text{with} \quad p = \frac{1}{d+1}$$

This is a truly remarkable result. It predicts a very specific, non-Arrhenius temperature dependence. For a typical three-dimensional material ($d=3$), the exponent is $p=1/4$. For a two-dimensional sheet like graphene, it's $p=1/3$. For a one-dimensional nanowire, it's $p=1/2$. By simply measuring how conductivity changes with temperature, we can determine the effective dimensionality of the world in which the electrons are moving!
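In practice, the exponent $p$ can be read straight off a log-log plot. The sketch below uses synthetic, noise-free data with an assumed $T_0 = 100$ K and $\sigma_0 = 1$: since $\ln(-\ln \sigma) = p\,(\ln T_0 - \ln T)$, a least-squares fit of $\ln(-\ln \sigma)$ against $\ln T$ has slope $-p$.

```python
import math

# Recovering the hopping exponent p from a conductivity-vs-temperature
# sweep. Synthetic Mott-law data, sigma = exp(-(T0/T)^p), with assumed
# T0 = 100 K and p = 1/4 (three-dimensional hopping).
T0, p_true = 100.0, 0.25
Ts = [0.5 + 0.1 * i for i in range(50)]          # 0.5 K .. 5.4 K
sigmas = [math.exp(-((T0 / T) ** p_true)) for T in Ts]

x = [math.log(T) for T in Ts]
y = [math.log(-math.log(s)) for s in sigmas]     # ln(-ln sigma)

# ordinary least-squares slope of y against x
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) \
        / sum((a - xbar) ** 2 for a in x)
p_fit = -slope
print(p_fit)    # recovers p = 1/4, flagging d = 3
```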

This dimensionality isn't always what it seems. Consider a 2D sheet of material patterned into a very narrow ribbon of width $W$. At high temperatures, the optimal hop is short, and the electrons experience a 2D world ($p=1/3$). But as we lower the temperature, the optimal hopping distance grows. Eventually, it becomes much larger than the width of the ribbon ($R_{opt} \gg W$). At this point, the search for a destination state is effectively confined to a line along the ribbon. The problem becomes quasi-one-dimensional, and the exponent switches to $p=1/2$, just as the theory predicts for $d=1$. This beautiful phenomenon, known as a dimensional crossover, has been confirmed in experiments and showcases the predictive power of the model.

The other parameter in Mott's law, the characteristic temperature $T_0$, is not just a fit parameter either. The derivation shows that it's packed with the microscopic physics of the material:

$$k_B T_0 \propto \frac{1}{g_F \xi^d}$$

A high density of states ($g_F$) or a large localization length ($\xi$, meaning less tightly trapped electrons) makes hopping easier, resulting in a smaller $T_0$ and thus higher conductivity.

It's also reassuring that a completely different theoretical approach, using percolation theory, yields the exact same exponent. In that picture, conduction occurs when a continuous chain of "easy" hops percolates across the entire sample, like water finding a path through coffee grounds. The condition for this percolation threshold to be met leads directly back to Mott's law.

A World of Hoppers: Crossovers and Interactions

Mott's law for VRH is a master theory for transport at very low temperatures. But physics is a story of regimes and crossovers. At slightly higher temperatures, the energy penalty for hopping is less severe. An electron might not need to search far and wide for the perfect hop; simply jumping to the nearest neighbor might be good enough. This nearest-neighbor hopping (NNH) follows a simpler Arrhenius-like law. Therefore, as we cool a disordered material, we often see a crossover from NNH behavior to VRH behavior at a specific temperature $T_c$, which can be estimated by finding where the two mechanisms give the same conductivity.
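As a back-of-the-envelope estimate, one can equate the two exponents rather than the full conductivities (the prefactors shift $T_c$ only modestly). The sketch below uses illustrative, assumed values for the activation energy $E_a$ and the Mott temperature $T_0$, not data from any specific material.

```python
k_B = 8.617e-5        # Boltzmann constant, eV/K

# Illustrative (assumed) material parameters:
E_a = 0.01            # nearest-neighbor activation energy, eV
T0  = 1.0e6           # Mott characteristic temperature, K

def nnh_exponent(T):  return E_a / (k_B * T)      # Arrhenius law
def mott_exponent(T): return (T0 / T) ** 0.25     # 3D Mott VRH

# Equating the two exponents gives the crossover in closed form:
#   E_a/(k_B T) = (T0/T)^(1/4)  =>  T_c = (E_a/k_B)^(4/3) * T0^(-1/3)
T_c = (E_a / k_B) ** (4.0 / 3.0) * T0 ** (-1.0 / 3.0)
print(T_c, nnh_exponent(T_c), mott_exponent(T_c))
```

Below $T_c$ the Mott exponent is the smaller of the two, so the optimized variable-range hop wins; above it, nearest-neighbor hopping dominates.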

An even more profound complication arises when we stop ignoring a fundamental fact of nature: electrons are charged particles that repel each other. This Coulomb repulsion is a long-range force, and it dramatically alters the energy landscape. The very act of moving an electron from an occupied site to an empty site creates a positive-negative charge separation, which costs Coulomb energy. For the system to be stable, states very close to the average electron energy (the Fermi level) must be suppressed, as they would be too easy to rearrange. This carves out a soft "gap" in the density of states, known as the Efros-Shklovskii (ES) Coulomb gap.

Within this gap, the density of states is no longer constant but vanishes as $g(\epsilon) \propto |\epsilon|^{d-1}$. If we re-run our optimization program with this new density of states, we get a stunning new result. The hopping exponent becomes $p=1/2$, independent of the spatial dimension $d$. This universal exponent stems from the fact that the underlying Coulomb interaction energy ($E \sim 1/r$) has a form that doesn't depend on the dimension of the space it's acting in.

This leads to the Efros-Shklovskii VRH law:

$$\sigma(T) \sim \exp\left[-\left(\frac{T_1}{T}\right)^{1/2}\right]$$

So, at the very lowest temperatures, we expect to see this universal $T^{-1/2}$ behavior in the exponent. The world of hopping is thus a rich tapestry of competing effects. As we lower the temperature, we might first cross over from NNH to Mott VRH, and then, at even lower temperatures, from Mott VRH to ES-VRH.
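A practical way to tell these regimes apart in data, without assuming an exponent in advance, is the "reduced activation energy" analysis (a Zabrodskii-type plot): for $\sigma \sim \exp[-(T^*/T)^p]$, the logarithmic derivative $w(T) = d\ln\sigma/d\ln T$ equals $p\,(T^*/T)^p$, so $\ln w$ versus $\ln T$ is a straight line of slope $-p$. The sketch below applies it to synthetic ES-VRH data with an assumed $T_1 = 50$ K and $\sigma_0 = 1$.

```python
import math

# Reduced-activation-energy analysis on synthetic Efros-Shklovskii data:
# ln(sigma) = -(T1/T)^(1/2), so the slope of ln(w) vs ln(T) should be -1/2.
T1, p_true = 50.0, 0.5
Ts = [0.2 * i for i in range(5, 105)]            # 1.0 K .. 20.8 K
ln_sigma = [-((T1 / T) ** p_true) for T in Ts]

# centered numerical derivative of ln(sigma) with respect to ln(T)
xs, ys = [], []
for i in range(1, len(Ts) - 1):
    w = (ln_sigma[i + 1] - ln_sigma[i - 1]) / \
        (math.log(Ts[i + 1]) - math.log(Ts[i - 1]))
    xs.append(math.log(Ts[i]))
    ys.append(math.log(w))

# least-squares slope of ln(w) against ln(T)
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys)) \
        / sum((a - xbar) ** 2 for a in xs)
p_est = -slope
print(p_est)    # close to 1/2, the Efros-Shklovskii signature
```

Fed Mott data instead, the same analysis returns $p \approx 1/4$, so a single plot distinguishes the two hopping regimes.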

We can even test this idea. The ES law relies on the long-range nature of the Coulomb force. What if we screen it? By placing a metallic plate near our sample, we can effectively cut off the Coulomb interaction at distances larger than the plate separation, $r_g$. The theory then predicts that at temperatures so low that the optimal ES hop would be longer than $r_g$, the ES mechanism should fail. The system should revert to Mott VRH, where only short-range properties matter. Amazingly, this is precisely what is observed, providing a powerful confirmation of our understanding of the intricate dance between disorder, quantum mechanics, and electron interactions that governs the flow of charge in this fascinating corner of the physical world.

Applications and Interdisciplinary Connections

So, we have this marvelous idea of variable-range hopping. At its heart is a simple, yet profound, principle of optimization. An electron trapped on a localized site in a disordered landscape doesn’t just hop to its nearest neighbor, nor does it attempt an impossibly distant leap. Instead, it plays a statistical game, finding the "Goldilocks" hop—not too far in distance, not too high in energy. This beautiful compromise, quantified by Sir Nevill Mott, gives rise to the characteristic temperature dependence we've explored.

But a physical law is only as powerful as its reach. Does this elegant concept just live in a theorist's notebook, or do we see its fingerprints in the real world? This is where the story gets truly exciting. We are about to embark on a journey to see how this single idea provides the key to understanding a staggering variety of phenomena, from the silicon chips in your pocket to the exotic quantum fluids at the frontiers of physics.

The Heart of the Digital Age: Doped Semiconductors

Let's start with the material that defines our era: silicon. A perfectly pure crystal of silicon at low temperatures is an excellent insulator. The electrons are all locked into their bonds. Now, let's play God on a microscopic scale and sprinkle in some impurities, say, phosphorus atoms. Each phosphorus atom donates an extra electron that isn't needed for bonding. This electron is loosely bound to its parent phosphorus ion, forming a small, isolated "island" of localized charge in the vast, insulating silicon sea.

At low impurity concentrations, these islands are far apart, and the electrons remain stranded. But as we add more and more dopants, the islands get closer. An electron on one island can now sense the presence of another one nearby. It can't swim through the sea, but it can hop from one island to the next. This is precisely the world of variable-range hopping. If you measure the conductivity of such a doped semiconductor at low temperatures, you don't see a simple exponential rise. Instead, you find that the logarithm of the conductivity, $\ln \sigma$, varies linearly with $-T^{-1/4}$. This peculiar scaling is the unmistakable signature of Mott's law in three dimensions, a direct confirmation that electrons are making those optimized hops.
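This is exactly what a "Mott plot" shows: plotting $\ln \sigma$ against $T^{-1/4}$ yields a straight line whose slope is $-T_0^{1/4}$. The sketch below recovers $T_0$ from synthetic, noise-free data with an assumed $T_0 = 2 \times 10^5$ K and $\sigma_0 = 1$.

```python
# A "Mott plot" for a 3D hopper: ln(sigma) is linear in T^(-1/4), and
# the slope of the line gives the characteristic temperature T0.
# Synthetic data with an assumed T0 = 2.0e5 K and sigma0 = 1.
T0_true = 2.0e5
Ts = [2.0 + 0.5 * i for i in range(40)]          # 2 K .. 21.5 K
x = [T ** -0.25 for T in Ts]                     # T^(-1/4)
y = [-((T0_true / T) ** 0.25) for T in Ts]       # ln(sigma)

# least-squares slope of ln(sigma) against T^(-1/4)
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) \
        / sum((a - xbar) ** 2 for a in x)
T0_fit = slope ** 4                              # slope = -T0^(1/4)
print(T0_fit)                                    # recovers T0_true
```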

What happens if we keep adding dopants? Eventually, the islands begin to overlap so much that they merge, forming a continuous "continent" of conducting states that spans the entire crystal. At this point, the electrons are no longer localized; they are free to roam. The material has undergone a dramatic transformation—a Mott insulator-to-metal transition—from an insulator governed by hopping to a true metal with conventional band conduction. Variable-range hopping provides the perfect description of the electronic landscape on the insulating side of this fundamental divide.

A World of Disorder: Glasses, Plastics, and Leaky Capacitors

The beauty of Mott's argument is that it relies on disorder, so it's not just confined to nearly-perfect crystals. In fact, it shines brightest in truly messy systems.

Consider the insulator in one of the billions of capacitors inside a modern computer chip. Its job is to block the flow of current. An ideal insulator would do this perfectly. But real materials are never perfect. They always contain defects—a missing atom here, an impurity there—which create localized states, or "traps," within the insulating band gap. An electron can't jump into the high-energy conduction band, but it can find a zig-zag percolation path by hopping from one trap to another. This process gives rise to a tiny but measurable "leakage current" that follows the laws of hopping conduction. For electrical engineers trying to build smaller and more efficient devices, this leakage is a critical problem, and understanding it through the lens of VRH is the first step toward controlling it.

The story gets even more compelling when we enter the world of "soft matter." Can you imagine a plastic that conducts electricity? These materials, known as conjugated polymers, are the basis for technologies like flexible OLED displays and printable solar cells. They consist of long, chain-like molecules tangled together in a jumble. For a charge carrier to move through this mess, it must hop from one molecular chain to a neighboring one. This is a textbook scenario for VRH. What's truly amazing is that the theory is so powerful that by simply measuring how the material's conductivity changes with temperature, we can use Mott's law to work backwards and calculate the average size of the electron's quantum wavefunction on these molecules! This "localization length," $\xi$, is a microscopic quantity, perhaps only a nanometer or two, yet we can deduce it from a macroscopic measurement with a voltmeter and a thermometer. The same principles even apply to discotic liquid crystals, where disc-shaped molecules stack into columns, and the conductivity between columns is governed by two-dimensional hopping. The astounding universality of hopping physics connects the behavior of materials as different as doped silicon and molecular goo.
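Working backwards looks like this in practice. For $d = 3$ the relation $k_B T_0 \propto 1/(g_F \xi^3)$ is often written with a numerical prefactor $C \approx 18$ (the exact value depends on the percolation treatment; $C = 18$ is an assumption here), so a fitted $T_0$ plus an independent estimate of $g_F$ yields $\xi$. The numbers below are illustrative, not a real dataset.

```python
# Inverting Mott's T0 to estimate the localization length xi, using
# k_B * T0 ≈ C / (g_F * xi^3) for d = 3. C = 18 is an assumed prefactor
# (values around 18-24 appear in the literature); g_F and T0 are
# illustrative, order-of-magnitude inputs.
k_B = 1.381e-23        # Boltzmann constant, J/K
C   = 18.0             # assumed 3D Mott prefactor
g_F = 6.2e43           # density of states at the Fermi level, 1/(J m^3)
T0  = 1.0e6            # characteristic temperature from a Mott-plot fit, K

xi = (C / (g_F * k_B * T0)) ** (1.0 / 3.0)
print(xi * 1e9, "nm")  # comes out at the nanometer scale
```

With these inputs $\xi$ lands at a few nanometers, exactly the "nanometer or two" scale the text describes.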

Probing the Hop with Magnetic Fields

We can do more than just observe hopping; we can actively probe it. What happens if we apply a strong magnetic field? Let's picture the electron's wavefunction on its localized island as a fuzzy quantum cloud. A magnetic field has a fascinating effect: it squeezes this cloud, forcing the electron into a tighter orbit. Now, if this cloud is smaller, its edges have less overlap with the cloud on a neighboring island. This reduced overlap makes the quantum tunneling process—the hop—less probable.

The consequence is clear: the material's resistance should increase. This phenomenon is known as positive magnetoresistance. Not only does the resistance go up, but the theory of hopping allows us to predict precisely how it should depend on the field strength, typically as $B^2$ in weak fields. The observation of this effect in a vast range of disordered insulators is another powerful piece of evidence that our picture of hopping between localized states is fundamentally correct.
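The natural scale for this "squeezing" is the magnetic length, $\lambda_B = \sqrt{\hbar/(eB)}$, and the simple weak-field $B^2$ regime holds roughly while $\lambda_B$ remains large compared to the localization length $\xi$. A quick sketch (with $\xi = 3$ nm taken as an illustrative assumption):

```python
import math

# Magnetic length lambda_B = sqrt(hbar / (e B)): the scale on which a
# field compresses a localized wavefunction. The weak-field hopping
# magnetoresistance regime requires lambda_B >> xi.
hbar = 1.055e-34       # reduced Planck constant, J s
e    = 1.602e-19       # elementary charge, C
xi   = 3e-9            # illustrative localization length, m

for B in (0.1, 1.0, 10.0):                        # field in tesla
    lam = math.sqrt(hbar / (e * B))
    print(f"B = {B:5.1f} T: lambda_B = {lam * 1e9:5.1f} nm, "
          f"lambda_B/xi = {lam / xi:4.1f}")
```

At 1 T the magnetic length is already down to roughly 25 nm, so for tightly localized states the weak-field expansion survives to quite large laboratory fields.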

Hopping at the Frontiers of Physics

The concept of variable-range hopping is so robust that it extends into some of the most exotic and celebrated territories of modern condensed matter physics.

One such territory is the fractional quantum Hall effect. In this bizarre state of matter, which appears in a two-dimensional sheet of electrons at extremely low temperatures and in colossal magnetic fields, electrons cease to act individually. They conspire to form a new type of quantum fluid whose elementary excitations behave like particles with a fraction of an electron's charge (e.g., $e^{\star} = e/3$). Even in these incredibly pure systems, a faint landscape of disorder potential creates "puddles" where these fractionally charged quasiparticles can become localized. For a current to flow, these quasiparticles must hop from one puddle to the next.

Here, however, there's a crucial new ingredient. The quasiparticles interact with one another via the long-range Coulomb force. This strong repulsion profoundly alters the energy landscape, carving out a "soft gap" in the density of states available for hopping near the chemical potential. Mott's original derivation assumed a constant density of states. When we re-apply his optimization logic to this new, Coulomb-gapped landscape, we discover a related but distinct law. The conductivity is predicted to follow an $\exp[-(T_{ES}/T)^{1/2}]$ dependence, a law first worked out by Efros and Shklovskii. We are using the essential idea of hopping not just for electrons, but for emergent particles with fractional charge, revealing deep truths about strongly interacting quantum matter.

Finally, what happens if we give the hopping electrons some help? Imagine we illuminate our disordered material with a high-frequency laser. A photon can provide a large, fixed packet of energy, enabling an electron to make a long-distance hop to a resonant site it couldn't otherwise reach. If the radiation is strong enough, it can effectively link many sites into large conductive clusters. The bottleneck for conduction is now the much slower, phonon-assisted hop between these large clusters. The hopping distance is no longer variable; it's fixed by the typical cluster separation. In this case, the "variable-range" part of the argument vanishes. The electron simply has to wait for a phonon of the right energy to make the jump. The physics simplifies, and we recover a simple thermally activated (Arrhenius) law, $\sigma \propto \exp(-E_a/k_B T)$. This shows how Mott's law is a beautiful and general principle that contains simpler models within it, unifying our understanding of transport in disordered systems.

From the bedrock of our digital world to the gossamer threads of conducting plastics and the strange quantum seas of the Hall effect, the simple principle of an electron finding its optimal path continues to illuminate our understanding of the physical world. It is a stunning example of how a beautiful physical argument can have consequences that are, quite literally, all around us.