
Transferable Potentials

Key Takeaways
  • Transferable potentials, or force fields, are simplified classical models that define reusable rules for atomic interactions, enabling large-scale molecular simulations.
  • Force fields are constructed by summing bonded (springs, angles, torsions) and non-bonded (Lennard-Jones, Coulomb) energy terms, with specific exclusion rules to avoid double-counting.
  • The core limitation of simple force fields is the pairwise additivity assumption, which fails to capture many-body polarization effects, making potentials fundamentally state-dependent.
  • A hierarchy of models, from fixed-charge to polarizable and reactive force fields, allows scientists to balance computational cost with physical accuracy for diverse applications.

Introduction

To predict the behavior of complex molecular systems like a folding protein or a growing crystal, we must simplify the impossibly complex laws of quantum mechanics. This simplification is the core idea behind transferable potentials, also known as force fields. Instead of tracking every electron, we treat atoms as classical spheres whose interactions are governed by a defined set of rules. The central challenge and ambition in molecular simulation is to develop a single, universal set of these rules—a transferable potential—that can accurately describe any molecule in any environment. This article addresses the quest for such a universal model, exploring both its incredible power and its inherent limitations.

This exploration will guide you through the foundational concepts and practical applications of transferable potentials. In the "Principles and Mechanisms" chapter, we will dissect the anatomy of a force field, examining the physical meaning behind its components and confronting the theoretical reasons, such as many-body effects, that limit perfect transferability. Following this, the "Applications and Interdisciplinary Connections" chapter will demonstrate how these potentials serve as a chemist's Lego set, enabling the modeling of everything from simple organic molecules and complex proteins to advanced materials and even chemical reactions.

Principles and Mechanisms

Imagine you want to build anything in the world—a car, a computer, a skyscraper. You wouldn't start by calculating the quantum mechanical interactions of every single atom in your steel beams and silicon chips. That would be insane! Instead, you use a set of well-understood rules: the tensile strength of steel, the electrical resistance of copper, the principles of mechanics. You work with effective, macroscopic properties.

Molecular simulation faces a similar challenge. Our goal is to predict the behavior of complex systems—a protein folding, a drug binding to a target, a crystal growing from a solution. While the universe is governed by the beautiful and precise laws of quantum mechanics, solving the Schrödinger equation for a mole of water molecules is, and will likely remain, an impossible task. So, we make a brilliant simplification. We step back from the fuzzy, probabilistic world of electrons and wavefunctions and treat atoms as classical objects—tiny spheres, moving according to Newton's laws. The question then becomes: what are the rules of their interaction? What are the "forces" between them? This set of rules is what we call a force field, or a potential energy function. The dream is to find a single, transferable set of rules that works for all molecules, in all situations—a kind of universal LEGO set for building matter.

An Atomic LEGO Set: The Anatomy of a Force Field

How do we assemble such a set of rules? We break down the complex dance of atoms into a sum of simpler, more manageable pieces. A typical force field is a masterpiece of pragmatic physics, partitioning interactions into two main categories: those between atoms that are chemically bonded, and those that are not.

The Push and Pull of Strangers: Non-Bonded Interactions

Atoms that aren't directly connected by a chemical bond still feel each other's presence. They attract each other weakly at a distance, and repel each other strongly if they get too close. Think of it like people in a crowded room; they try not to bump into each other, but they might still interact socially. A wonderfully effective way to model this is the Lennard-Jones (LJ) potential.

The LJ potential for two neutral atoms separated by a distance $r$ has a beautifully simple form:

$$U_{LJ}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$$

This formula contains two distinct parts, each with a clear physical meaning. The first term, $(\sigma/r)^{12}$, is a steep, repulsive wall. The power of 12 makes it rise extremely steeply at short distances, modeling the Pauli repulsion that prevents two atoms' electron clouds from occupying the same space. It's the reason matter is solid and you don't fall through the floor. The second term, $-(\sigma/r)^{6}$, is a gentler, long-range attraction. This term models the fleeting, temporary dipoles that arise from the sloshing of electrons within atoms, a phenomenon known as London dispersion forces. It's a subtle "stickiness" that helps hold liquids and solids together.

The two parameters, $\epsilon$ and $\sigma$, are the knobs we can tune for each type of atom. As derived in the classic analysis of this potential, these parameters have direct physical interpretations. The parameter $\sigma$ defines the effective "size" of the atom; it is the finite distance at which the potential energy crosses zero. The parameter $\epsilon$ defines the "stickiness," or the depth of the potential well. The most stable separation between the two atoms occurs at a distance $r_{\min} = 2^{1/6}\sigma$, where the potential energy reaches its minimum value of $-\epsilon$. By assigning different $\epsilon$ and $\sigma$ values to different "atom types" (e.g., a carbon in a methane molecule versus a carbon in a benzene ring), we begin to build our chemical LEGO set.
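
To make these numbers concrete, here is a minimal sketch in Python (NumPy only) that evaluates the LJ potential and numerically confirms the two interpretations above: the curve crosses zero at $r = \sigma$ and bottoms out at $r_{\min} = 2^{1/6}\sigma$ with depth $-\epsilon$. The roughly argon-like parameter values are illustrative placeholders, not recommendations from any particular force field.

```python
import numpy as np

def lj_potential(r, epsilon, sigma):
    """Lennard-Jones potential: 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

# Illustrative, roughly argon-like values (kJ/mol and nm) -- placeholders only.
epsilon, sigma = 0.997, 0.340

r = np.linspace(0.3, 1.2, 100_000)
u = lj_potential(r, epsilon, sigma)
i_min = np.argmin(u)

print(f"U(sigma)        = {lj_potential(sigma, epsilon, sigma):+.4f}  (zero crossing)")
print(f"numerical r_min = {r[i_min]:.4f} nm")
print(f"analytic  r_min = {2**(1/6) * sigma:.4f} nm")
print(f"U(r_min)        = {u[i_min]:+.4f} kJ/mol  (approximately -epsilon)")
```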

Of course, atoms can also carry net or partial charges. For these, we add another, more familiar term: the Coulomb potential, $U_C(r) = q_i q_j / (4\pi\varepsilon_0 r)$, which describes the electrostatic interaction between the partial charges $q_i$ and $q_j$ assigned to each atom.

The Covalent Handshake: Bonded Interactions

Atoms linked by chemical bonds are governed by a different set of rules. We model bonds as springs, penalizing stretches or compressions from an ideal length. We model bond angles as flexible joints, penalizing deviations from an equilibrium angle. But perhaps the most interesting bonded term is the torsional potential, which governs rotation about a chemical bond.

Why is it easy to rotate around a C-C single bond in ethane, but incredibly difficult to rotate around the C=C double bond in ethene? The answer lies in the quantum mechanical nature of orbital overlap. In a double bond, the $\pi$-bond is formed by the sideways overlap of $p$-orbitals. This overlap is maximal when the molecule is planar and drops to zero when the bond is twisted by 90 degrees. This loss of stabilizing overlap creates a large energy barrier. This physical insight informs the mathematical form of our torsional potential. The stabilization energy is often proportional to $\cos^2(\phi)$, where $\phi$ is the dihedral angle. Using a simple trigonometric identity, this becomes a term proportional to $\cos(2\phi)$, a "twofold" periodic potential that captures the two equivalent low-energy planar states during a full 360-degree rotation. The same principle explains why the peptide bond in proteins is rigid and planar: the lone pair on the nitrogen atom conjugates with the carbonyl group, creating a partial double bond character that is destroyed upon rotation.
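
To make the trigonometric step explicit: starting from a stabilization energy proportional to $\cos^2\phi$, a half-angle identity turns it into the twofold periodic form. The symbol $V_2$ for the barrier height follows the common force-field convention; this is a sketch of the usual algebra, not a quote from any specific force field.

```latex
% Half-angle identity applied to the pi-overlap stabilization:
\cos^2\phi = \frac{1 + \cos 2\phi}{2}
% Dropping the constant offset and putting the zero of energy at the
% planar minima gives a twofold periodic torsion with barrier V_2:
U_{\text{tors}}(\phi) = \frac{V_2}{2}\left[\,1 - \cos 2\phi\,\right]
% Minima at phi = 0 and 180 degrees; maximum U = V_2 at phi = 90 degrees.
```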

Assembling the Model: Don't Count Twice!

Now we have a collection of springs and magnets. A critical detail emerges when we put them together: how do we avoid double-counting interactions? Consider a chain of atoms 1-2-3-4. The distance between atoms 1 and 3 is constrained by the 1-2 and 2-3 bond lengths and the 1-2-3 bond angle. The energy associated with this geometry is already captured by the bond and angle potentials. If we were to also include a Lennard-Jones interaction between atoms 1 and 3, we would be modeling their repulsion twice! This would corrupt our model and make the parameters non-transferable.

The standard, elegant solution is to establish a set of exclusion rules. Non-bonded interactions are completely excluded for atoms connected by one bond (1-2 pairs) or two bonds (1-3 pairs). Their interactions are assumed to be implicitly handled by the bonded terms. The case of 1-4 pairs is more subtle. The torsional potential for the 1-2-3-4 dihedral and the direct non-bonded interaction between atoms 1 and 4 both contribute to the rotational energy barrier. Excluding the 1-4 non-bonded term would force the torsional parameters to be highly system-specific. Including it at full strength would lead to double-counting. The pragmatic compromise is to include the 1-4 non-bonded interaction but scale it down by a specific factor (e.g., 0.5 or 0.83). This clever "1-4 scaling" apportions the interaction, allowing both the non-bonded and torsional parameters to remain more general and, therefore, more transferable.
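
As a sketch of this bookkeeping, the snippet below builds the 1-2, 1-3, and 1-4 pair lists from a bond list by walking the molecular graph, then attaches a scaling factor to each class. The 0.5 factor is purely illustrative; real force fields each define their own values, often different ones for the LJ and Coulomb parts.

```python
from collections import defaultdict
from itertools import combinations

def classify_pairs(n_atoms, bonds):
    """Classify atom pairs as 1-2, 1-3, or 1-4 from the bond graph."""
    neighbors = defaultdict(set)
    for i, j in bonds:
        neighbors[i].add(j)
        neighbors[j].add(i)

    pairs_12 = {frozenset(b) for b in bonds}
    pairs_13, pairs_14 = set(), set()
    for j in range(n_atoms):
        # Two atoms bonded to a common atom j form a 1-3 pair.
        for i, k in combinations(neighbors[j], 2):
            pairs_13.add(frozenset((i, k)))
    for i, j in bonds:
        # Walk one bond outward from each end of bond i-j to find 1-4 pairs.
        for a in neighbors[i] - {j}:
            for b in neighbors[j] - {i}:
                if a != b:
                    pairs_14.add(frozenset((a, b)))
    pairs_13 -= pairs_12                 # guard against rings
    pairs_14 -= pairs_12 | pairs_13
    return pairs_12, pairs_13, pairs_14

# Butane backbone 0-1-2-3: pair (0, 3) is a 1-4 pair -- scaled, not excluded.
p12, p13, p14 = classify_pairs(4, [(0, 1), (1, 2), (2, 3)])
scale = {**{p: 0.0 for p in p12 | p13},  # 1-2 and 1-3: fully excluded
         **{p: 0.5 for p in p14}}        # 1-4: scaled (factor is illustrative)
print(sorted(tuple(sorted(p)) for p in p14), scale[frozenset((0, 3))])
```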

The Heart of the Matter: The Quest for Transferability

The central goal of a general-purpose force field is transferability. This means that the parameters defined for a specific atom type—say, an sp² carbon in an aromatic ring—should be reusable in any molecule containing that chemical moiety. This is the difference between having a model that can only describe a single molecule it was trained on (representativity) and having a model that can predict the behavior of new, unseen molecules (predictivity).

How do we decide if a parameter can be transferred? Consider the practical task of modeling a modified DNA base. If we start with a cytidine molecule and add a simple methyl group, the electronic perturbation to the original ring is minor. We can likely transfer the charge and LJ parameters for the original atoms and simply add standard parameters for the new methyl group. But what if we replace a hydroxyl group on the sugar with a highly electronegative fluorine atom? This substitution drastically changes the local electronic environment, pulling electron density towards the fluorine. If we try to transfer the old charges, our model will fail to reproduce the correct electrostatic potential. The conformational preferences of the sugar ring will also change significantly. In this case, transferability fails. We must go back to the drawing board and derive new, specialized parameters for the sugar, a process known as reparameterization. The most dramatic failure of transferability occurs when the formal charge state changes, for example, when an amine group becomes protonated. Trying to use parameters from a neutral molecule for a cation is a recipe for disaster.

This decision process is guided by quantitative metrics. We can check how well the transferred charges reproduce a high-level quantum mechanics electrostatic potential. If the error is too large, we must refit. We can compare torsional energy profiles from our model to those from QM. If the barrier heights are off by more than a certain tolerance, we refit. This systematic validation is the disciplined art of force field development.

The parameters themselves are not arbitrary. They are "learned" by fitting the classical model to reproduce data from either experiments or, more commonly, high-level quantum mechanical calculations. For partial charges, a standard procedure is Restrained Electrostatic Potential (RESP) fitting. Here, we first use QM to compute the "true" electrostatic field around a molecule. Then, we find the set of atom-centered point charges that best reproduces this field, like an artist creating a pointillist sketch that captures the essence of a complex photograph. More advanced techniques like force matching or relative entropy minimization aim to match the forces or even the entire statistical distribution of atomic configurations from a high-fidelity reference simulation, providing powerful routes to creating accurate and robust potentials.
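
A heavily simplified sketch of the idea behind ESP fitting, without the hyperbolic restraints that put the "R" in RESP: solve a constrained least-squares problem for atom-centered charges that best reproduce a reference potential on a grid of points. The geometry, grid, and reference values below are random stand-ins for real QM data, and Gaussian units are used so the design matrix is just inverse distances.

```python
import numpy as np

def fit_esp_charges(atom_xyz, grid_xyz, esp_ref, total_charge=0.0):
    """Least-squares point charges reproducing a reference ESP.

    Minimizes |A q - esp_ref|^2 subject to sum(q) = total_charge, where
    A[k, i] = 1 / |grid_k - atom_i|. Real RESP adds restraints toward zero.
    """
    d = np.linalg.norm(grid_xyz[:, None, :] - atom_xyz[None, :, :], axis=2)
    A = 1.0 / d

    # Enforce the total-charge constraint with a Lagrange multiplier by
    # solving the KKT system [[2 A^T A, 1], [1^T, 0]] [q; lam] = [2 A^T b; Q].
    n = atom_xyz.shape[0]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = 2.0 * A.T @ A
    kkt[:n, n] = 1.0
    kkt[n, :n] = 1.0
    rhs = np.concatenate([2.0 * A.T @ esp_ref, [total_charge]])
    return np.linalg.solve(kkt, rhs)[:n]

# Stand-in data: 3 "atoms" and a random ESP grid (no physical meaning).
rng = np.random.default_rng(0)
atoms = rng.normal(size=(3, 3))
grid = atoms.mean(axis=0) + 3.0 * rng.normal(size=(200, 3))
esp = rng.normal(size=200)
q = fit_esp_charges(atoms, grid, esp)
print(q, q.sum())  # fitted charges; they sum to the imposed total charge
```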

The Sobering Reality: The Limits of Transferability

The dream of a perfectly transferable potential—a single set of LEGO bricks for all of chemistry—runs into a formidable obstacle: the many-body problem. Our simple force field is built on a crucial assumption: pairwise additivity. It assumes the total energy is just the sum of interactions between pairs of atoms. The force between atom A and atom B is independent of whether atom C is nearby. In reality, this is not true.

A brilliant illustration of this limitation comes from comparing a molecule in the gas phase to the same molecule in a liquid. We can painstakingly parameterize a Lennard-Jones potential to perfectly reproduce the interaction energy and equilibrium distance of two molecules forming a dimer in a vacuum. Our model is perfectly representative of this two-body system. But when we take this potential and try to predict a property of the liquid, like its heat of vaporization, the prediction can be wildly inaccurate. Why?

In a dense liquid, a molecule is not interacting with just one neighbor; it is in a constant, jostling crowd. The collective electric field of all its neighbors polarizes the molecule, distorting its electron cloud and changing its effective dipole moment. This is a many-body effect. The interaction between A and B is affected by the presence of C, D, E, and all the others. A simple pairwise-additive potential, by its very construction, cannot capture this physics.

This leads us to the deeper concept of the Potential of Mean Force (PMF). The effective interaction potential we use in a simulation is not the "bare" vacuum interaction; it is a free energy that has implicitly averaged over all the motions of the surrounding solvent molecules. In the zero-density limit (a vacuum), the PMF is identical to the bare pair potential. But at any finite density, it includes these averaged many-body contributions. Because the structure and dynamics of the environment change with temperature and density, the PMF is fundamentally state-dependent. This is the formal reason for the limits of transferability. A potential derived to reproduce the structure of a liquid at one temperature will not, in general, be correct at another temperature. This is underscored by Henderson's Uniqueness Theorem, which guarantees that a pair potential is unique for a given structure at a specific state point, but makes no promises about its validity elsewhere.
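
The pair-level version of this idea fits in a few lines of code: given a radial distribution function $g(r)$ measured at one state point, the corresponding pair PMF is $W(r) = -k_B T \ln g(r)$. The sketch below uses a synthetic $g(r)$ as a stand-in for simulation data; the point is that $g(r)$, and hence $W(r)$, carries the fingerprint of one specific temperature and density.

```python
import numpy as np

K_B = 0.008314  # Boltzmann constant in kJ/(mol*K)

def pmf_from_gr(g_r, temperature):
    """Potential of mean force W(r) = -k_B T ln g(r).

    W(r) is a free energy: it silently averages over the surrounding
    medium, so it is only valid at the state point where g(r) was measured.
    """
    with np.errstate(divide="ignore"):  # g(r) -> 0 gives W -> +inf (hard core)
        return -K_B * temperature * np.log(g_r)

# Synthetic g(r) with a first solvation peak (stand-in for real data).
g = np.array([0.01, 1.80, 1.10, 0.90, 1.00, 1.00])
print(pmf_from_gr(g, temperature=300.0))  # kJ/mol at each sampled r
```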

This state-dependence is especially dramatic for charged species like ions. A neutral methane molecule interacts via short-range forces, and its effective potential is relatively insensitive to the broader environment. An ion, however, with its powerful long-range electric field, organizes the entire solvent and any other ions into a complex, structured "atmosphere" around itself. This screening effect depends sensitively on the ion concentration and the solvent's dielectric properties. A potential for an ion is a PMF that has swallowed an enormous amount of environmental information, making it acutely non-transferable.

Fortunately, we are not stuck. We have a hierarchy of models that allow us to trade computational cost for physical accuracy.

  1. Fixed-Charge Models: Cheap, fast, but with limited transferability. They are excellent for exploring the conformational space of large biomolecules but less reliable for properties that depend on environmental response.
  2. Polarizable Models: More expensive, as they must self-consistently calculate how each atom's electron cloud responds to its neighbors at every step. By explicitly modeling electronic polarization, they are vastly more transferable across different phases and environments.
  3. Explicit Many-Body Potentials: The most expensive, these models go beyond pairwise additivity and include explicit three-body (or higher) interaction terms. They are designed to be highly accurate and transferable representations of the true quantum mechanical potential energy surface.

The journey to create transferable potentials is a profound scientific endeavor. It is a continuous cycle of developing clever approximations, rigorously testing their limits, and then building better, more physically complete models. It is the art of encoding our fundamental understanding of how nature works into a set of rules that a computer can follow, bringing us ever closer to the ultimate goal of predicting and designing the world of molecules from the bottom up.

Applications and Interdisciplinary Connections

Having journeyed through the principles and mechanisms of transferable potentials, we might feel like we've learned the grammar of a new language. But grammar alone is not poetry. The real joy comes when we use this language to describe the world, to tell stories about how things work. So now, let's step out of the abstract and into the bustling world of atoms to see what tales these potentials can tell. We'll find that this single idea—that the essence of atomic interactions can be captured and reused—is a golden thread connecting a startlingly diverse tapestry of scientific inquiry, from the simplest molecules to the very fabric of life and the materials of our future.

The Chemist's Lego Set: Building Molecules from the Bottom Up

Imagine you have a box of Lego bricks. Some are red, some blue; some are long, some short. You know that any two red bricks connect in the same way, and this allows you to build anything from a simple wall to a complex castle. The art of creating a transferable force field is much like designing this perfect Lego set for atoms. The challenge is deciding which atoms count as the "same" type of brick.

Consider the humble alkanes, the backbone of organic chemistry. A simple hydrocarbon like butane has two kinds of carbon atoms: the ones at the end ($\mathrm{CH_3}$) and the ones in the middle ($\mathrm{CH_2}$). It might seem natural to create two "atom types," a terminal one and an internal one. This works beautifully for all straight-chain alkanes. But what happens when the chain branches? In isobutane, a new character appears: a carbon bonded to three other carbons ($\mathrm{CH}$). And in neopentane, we find a carbon bonded to four others ($\mathrm{C}$).

If our Lego set only has "end" and "middle" pieces, we simply cannot build these branched structures correctly. The interactions will be wrong, and our simulation will predict a liquid that behaves unlike the real thing. To create a potential that is truly transferable across the whole family of saturated hydrocarbons, we must recognize that the local environment matters. We need a minimal set of four distinct carbon "bricks": the primary $\mathrm{CH_3}$, the secondary $\mathrm{CH_2}$, the tertiary $\mathrm{CH}$, and the quaternary $\mathrm{C}$. With this carefully chosen set, we can now build and accurately simulate countless different organic molecules, a testament to the power of identifying the right fundamental building blocks.
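
A toy version of this atom-typing logic, assigning each carbon in a saturated hydrocarbon to one of the four "bricks" by counting its carbon neighbors (the type labels are invented for illustration):

```python
from collections import defaultdict

# United-atom carbon types keyed by the number of bonded carbons (labels invented).
CARBON_TYPES = {1: "CH3 (primary)", 2: "CH2 (secondary)",
                3: "CH (tertiary)", 4: "C (quaternary)"}

def assign_carbon_types(elements, bonds):
    """Map each carbon atom index to a type based on its C-C connectivity."""
    neighbors = defaultdict(list)
    for i, j in bonds:
        neighbors[i].append(j)
        neighbors[j].append(i)
    return {i: CARBON_TYPES[sum(elements[j] == "C" for j in neighbors[i])]
            for i, e in enumerate(elements) if e == "C"}

# Isobutane, heavy atoms only: central carbon 0 bonded to carbons 1, 2, 3.
print(assign_carbon_types(["C", "C", "C", "C"], [(0, 1), (0, 2), (0, 3)]))
# -> {0: 'CH (tertiary)', 1: 'CH3 (primary)', 2: 'CH3 (primary)', 3: 'CH3 (primary)'}
```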

This "Lego set" philosophy extends to the very heart of biology. Proteins are chains of amino acids, and their function depends on folding into precise three-dimensional shapes. One of the crucial events in this folding process is the formation of a disulfide bridge, where two cysteine residues (CYS) link up to form a cystine (CYX). This is not just a gentle coming-together; it's a chemical transformation. An S–H bond on each cysteine breaks, and a new S–S bond forms, creating a strong covalent staple that holds the protein's structure in place.

A transferable force field must be able to describe both states. This doesn't mean the parameters for a sulfur atom are the same in both cases—quite the contrary! It means the force field library contains distinct, pre-calibrated parameter sets for the thiol (CYS) and disulfide (CYX) states. When the bond forms, the simulation engine effectively swaps out the parameter set. The atom type for sulfur changes, altering its size and attraction parameters. The stiff, short S-H bond is replaced by a longer, more flexible S-S bond. New angle terms appear, and a crucial dihedral potential is introduced to govern the twist around the new C–S–S–C axis. This isn't a failure of transferability; it's a success of careful bookkeeping. The potential is transferable because it has been parameterized to handle these specific, recurring chemical motifs found across the entire protein universe.
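
In code, this bookkeeping amounts to a lookup-table swap. The sketch below caricatures it with a tiny residue-template dictionary; every name and number in the templates is an invented placeholder, not a value from any real force field library.

```python
# All parameter values below are invented placeholders for illustration.
RESIDUE_TEMPLATES = {
    "CYS": {  # reduced cysteine: thiol sulfur plus its hydrogen
        "atom_types": {"SG": "S_thiol", "HG": "H_thiol"},
        "bonds": {("SG", "HG"): {"r0": 1.34, "k": 300.0}},
    },
    "CYX": {  # disulfide-bonded cysteine: new sulfur type, no thiol hydrogen
        "atom_types": {"SG": "S_disulfide"},
        "bonds": {("SG", "SG'"): {"r0": 2.05, "k": 170.0}},  # longer, softer S-S
        "dihedrals": {("CB", "SG", "SG'", "CB'"): {"n": 2, "barrier": 15.0}},
    },
}

def form_disulfide(residue):
    """Swap a residue's parameter set from CYS to CYX when the S-S bond forms."""
    assert residue["name"] == "CYS", "can only oxidize a reduced cysteine"
    residue["name"] = "CYX"
    residue["params"] = RESIDUE_TEMPLATES["CYX"]  # new types, bonds, dihedral
    residue["atoms"].remove("HG")                 # the thiol hydrogen departs

res = {"name": "CYS", "atoms": ["CB", "SG", "HG"],
       "params": RESIDUE_TEMPLATES["CYS"]}
form_disulfide(res)
print(res["name"], res["atoms"])  # CYX ['CB', 'SG']
```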

Beyond Molecules: Materials, Sheets, and Strange Liquids

The same principles that allow us to model a flexible protein can be adapted to describe the rigid perfection of a material like graphene. Here, we have an infinite, flat sheet of carbon atoms arranged in a honeycomb lattice. What kind of potential do we need? We certainly need a bond-stretching term to get the bond lengths right and an angle-bending term to enforce the $120^\circ$ angles of the honeycomb. But what about out-of-plane motion? A sheet with only these two terms would be floppy like a handkerchief. To give it the characteristic flexural rigidity of a 2D material, we need a term that penalizes bending—an "improper torsion" potential that ensures each carbon and its three neighbors stay in a plane. Notice what we can leave out: for a single, pristine sheet, every atom is identical, so there are no partial charges and thus no Coulomb forces. By tailoring the potential to the essential physics, we can capture the behavior of this wonder material.
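
One simple way to encode that bending penalty, sketched below: measure how far each carbon sits out of the plane of its three neighbors and apply a harmonic restraint. Real graphene potentials usually phrase this as a dihedral-style improper term, and the force constant here is a placeholder, but the physical content is the same.

```python
import numpy as np

def out_of_plane_energy(center, n1, n2, n3, k_oop):
    """Harmonic penalty on an atom's displacement from its neighbors' plane."""
    normal = np.cross(n2 - n1, n3 - n1)
    normal /= np.linalg.norm(normal)
    h = np.dot(center - n1, normal)  # signed out-of-plane displacement
    return 0.5 * k_oop * h**2

# Three neighbors forming a flat triangle; central carbon flat, then lifted.
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([-0.5, 0.87, 0.0])
n3 = np.array([-0.5, -0.87, 0.0])
print(out_of_plane_energy(np.zeros(3), n1, n2, n3, k_oop=100.0))            # 0.0
print(out_of_plane_energy(np.array([0.0, 0.0, 0.1]), n1, n2, n3, k_oop=100.0))  # 0.5
```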

But just as we feel we've mastered the rules, nature throws us a curveball. Consider an ionic liquid—a salt that is molten at room temperature. It's a fluid composed entirely of charged cations and anions, a chaotic dance of positive and negative. If we try to model this using our standard fixed-charge force field, disaster strikes. Our simple models, often parameterized using isolated ions, drastically overestimate the electrostatic attraction in this dense, highly-charged soup. The simulated liquid becomes as viscous as honey, with ions crawling past each other a thousand times slower than in reality.

Here we hit a fundamental limit of simple transferable potentials. The assumption of a "fixed charge" on an atom breaks down when the local electric field is immensely strong and changes from point to point. In reality, the electron clouds of the ions polarize each other, screening their charges. To capture this, we need more sophisticated, polarizable force fields, where the charge distribution can respond to its environment. Furthermore, the simple mixing rules that work for neutral molecules often fail for the oddly shaped ions, requiring specific, non-standard parameters for each cation-anion pair. Ionic liquids teach us a crucial lesson: transferability is not guaranteed. It is a working approximation that can fail, and its failure points us toward deeper physics.

Zooming Out and Zooming In: A Multiscale Universe

So far, our potentials have operated at the all-atom level. But what if we want to simulate something truly enormous, like the assembly of a viral capsid or the folding of an entire chromosome? Simulating every atom becomes computationally impossible. The solution is to "zoom out" and adopt a coarse-grained (CG) description. Instead of modeling every atom of a protein, we might represent an entire amino acid as a single bead.

This act of coarse-graining is itself an exercise in creating a transferable potential, but at a higher level of abstraction. The key challenge is to create an effective potential between these beads that reproduces the correct large-scale behavior. For a problem like protein-ligand binding, this involves delicate trade-offs. We can, for instance, replace the explicit water solvent with an "implicit" model, which is a coarse-grained potential of mean force that captures the average effect of water. This can preserve the thermodynamics of binding, like the binding free energy, but it often messes up the kinetics—the speed of binding and unbinding—because it erases the friction and viscosity of the water. If we get too aggressive and coarse-grain the small ligand molecule itself into a simple sphere, we might lose the very shape and charge complementarity that allows it to bind to the protein with specificity. The art of coarse-graining lies in knowing what details you can afford to lose.

Just as we can zoom out, we can also zoom in. The world of classical potentials is built upon the foundation of quantum mechanics. Interestingly, the concept of transferable potentials appears there too. In quantum calculations of heavy elements, explicitly modeling all the electrons is a nightmare. But the inner-shell "core" electrons are tightly bound and chemically inert. Their main effect on the outer "valence" electrons is to screen the nuclear charge and, via the Pauli exclusion principle, to keep the valence electrons out of the core region. This entire complex effect can be bundled into an effective core potential (ECP), or pseudopotential. This ECP is a transferable object; the ECP for a sodium atom (which has a neon-like core) is a great starting point for the ECP of a magnesium ion, $\mathrm{Mg^+}$, which shares the same core.

The ultimate marriage of these two worlds is the hybrid QM/MM method, where a small, chemically active region (e.g., the active site of an enzyme) is treated with quantum mechanics, while the vast surrounding environment (the rest of the protein and solvent) is treated with a classical, transferable potential. For this to work, the two regions must communicate. In the most sophisticated schemes, this coupling is a two-way street. The classical atoms' charges polarize the quantum electron cloud, and in return, the quantum electron cloud's electric field polarizes the classical atoms (if a polarizable force field is used). This "mutual polarization" requires a delicate self-consistent dance where each part adapts to the other until a stable state is reached.
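
A toy illustration of that self-consistent dance, restricted to the classical polarizable side: induced point dipoles $\mu_i = \alpha_i E_i$, where each atom's field $E_i$ includes the fields of all the other induced dipoles, iterated until nothing changes. The geometry, polarizabilities, and fixed external field (standing in for the QM region's field) are all invented for illustration.

```python
import numpy as np

def dipole_field(mu, r_vec):
    """Field at displacement r_vec from a point dipole mu (Gaussian units)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (3.0 * np.dot(mu, rhat) * rhat - mu) / r**3

def induce_dipoles(pos, alpha, e_fixed, tol=1e-10, max_iter=200):
    """Self-consistent induced dipoles: mu_i = alpha_i * E_i(mu)."""
    mu = np.zeros_like(e_fixed)
    for _ in range(max_iter):
        mu_new = np.empty_like(mu)
        for i in range(len(pos)):
            e_i = e_fixed[i].copy()
            for j in range(len(pos)):
                if j != i:  # add the field of every other induced dipole
                    e_i += dipole_field(mu[j], pos[i] - pos[j])
            mu_new[i] = alpha[i] * e_i
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new  # converged: each dipole consistent with the rest
        mu = mu_new
    raise RuntimeError("polarization did not converge")

# Two polarizable sites in a fixed external field (all values invented).
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
alpha = np.array([1.0, 1.5])
e_fixed = np.tile([0.1, 0.0, 0.0], (2, 1))
print(induce_dipoles(pos, alpha, e_fixed))
```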

When the Rules Must Break: Modeling Chemical Reactions

There is one final frontier for our classical potentials. By their very construction, with bonds modeled as harmonic springs, they describe a world with a fixed topology. Bonds can stretch, bend, and twist, but they can never break. What, then, are we to do about chemistry itself?

Consider one of the most fundamental chemical acts: a proton transfer, where a proton hops from a hydronium ion ($\mathrm{H_3O^+}$) to a neighboring water molecule. A standard force field is blind to this event. But scientists, in their ingenuity, have devised ways to teach these old potentials new tricks.

One approach is the reactive force field (ReaxFF). Here, the very idea of a fixed bond is thrown out. Instead, a "bond order" is calculated on the fly as a continuous function of interatomic distance. As a proton moves away from its original oxygen and toward another, its bond order with the first oxygen smoothly decreases from one to zero, while its bond order with the second smoothly increases from zero to one. All the energy terms are cleverly designed to depend on these bond orders, yielding a seamless and continuous potential energy surface that can describe the entire reaction coordinate.
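
A schematic of that idea, assuming a simple ReaxFF-style functional form $BO(r) = \exp[p_1 (r/r_0)^{p_2}]$ with $p_1 < 0$ and $p_2 > 1$. The parameter values below are illustrative stand-ins, not fitted ReaxFF constants.

```python
import numpy as np

def bond_order(r, r0=1.0, p1=-0.1, p2=6.0):
    """Smooth bond order: near 1 around r0, decaying continuously toward 0."""
    return np.exp(p1 * (r / r0) ** p2)

# As the proton recedes from one oxygen (r grows) and approaches another,
# the two bond orders trade off smoothly -- no bond ever "snaps".
for r in (0.9, 1.0, 1.3, 1.8, 2.5):
    print(f"r = {r:.1f}  BO = {bond_order(r):.3f}")
```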

Another elegant solution is the Empirical Valence Bond (EVB) method. Here, we imagine the system as a quantum-mechanical mixture of two classical states: State 1, where the proton is on the first water molecule ($\mathrm{H_3O^+}\cdots\mathrm{H_2O}$), and State 2, where it's on the second ($\mathrm{H_2O}\cdots\mathrm{H_3O^+}$). Each of these states can be described by a normal, non-reactive force field. The EVB method then introduces a coupling term that allows the system to smoothly transition from one state to the other, giving a ground-state energy surface that correctly describes the bond-breaking and bond-forming process.
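
Concretely, the EVB ground state is the lower eigenvalue of a 2x2 Hamiltonian whose diagonal entries are the two ordinary force-field energies and whose off-diagonal entry is the coupling. A minimal sketch with made-up harmonic diabats along a proton-transfer coordinate:

```python
import numpy as np

def evb_ground_state(e1, e2, h12):
    """Lower eigenvalue of [[E1, H12], [H12, E2]], in closed form."""
    mean = 0.5 * (e1 + e2)
    return mean - np.sqrt((0.5 * (e1 - e2)) ** 2 + h12**2)

# Made-up diabatic surfaces: state 1 has its minimum at x = -1 (proton on
# the first water), state 2 at x = +1; a constant coupling mixes them.
x = np.linspace(-1.5, 1.5, 7)
e1 = 50.0 * (x + 1.0) ** 2
e2 = 50.0 * (x - 1.0) ** 2
for xi, g in zip(x, evb_ground_state(e1, e2, h12=10.0)):
    print(f"x = {xi:+.2f}   E_ground = {g:7.2f}")
# At the crossing point x = 0 the coupling lowers the barrier by h12.
```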

The Craftsman's Workshop: Forging the Potentials

By now, you might be wondering where all these magical parameters—the bond stiffnesses, the equilibrium angles, the partial charges—actually come from. They are not pulled from thin air. They are the product of a painstaking craft, a rigorous process of parameterization that is a scientific discipline in its own right.

The goal is to create a potential that reproduces a set of high-quality reference data, either from precise quantum chemistry calculations or from experiments. This is a complex optimization problem. For instance, when parameterizing a semi-empirical quantum model for polyenes, one must fit the model's fundamental parameters ($\alpha$, $\beta$, $U$) to reproduce not just the colors of the molecules (their excitation energies), but also their ionization potentials. Why both? Because excitation energies are energy differences, which are insensitive to the absolute energy scale. Including the ionization potential—the energy to remove an electron entirely—pins down this absolute scale and makes the parameter set robust and physically meaningful.

Similarly, when developing a coarse-grained model for a polymer blend, the goal might be to reproduce the thermodynamics of mixing, encapsulated in the famous Flory-Huggins $\chi$ parameter. One sophisticated approach is to parameterize the coarse-grained potential to match thermodynamic data derived from the microscopic structure, such as Kirkwood-Buff integrals, across a wide range of temperatures and compositions. The ultimate test of such a potential is its transferability: are the parameters fitted at one temperature and composition able to predict the behavior—for example, whether the two polymers will mix or separate—at another? To truly validate this, one must use techniques like cross-validation, where the model is tested on data it was not trained on.

This process reveals the deep truth of transferable potentials. They are not just arbitrary mathematical functions; they are simplified physical models, distilled essences of a more complex reality. Their power and their beauty lie in this act of distillation, in capturing the fundamental rules of atomic interaction in a form that is simple enough to compute, yet rich enough to predict the emergent behavior of matter in all its fascinating complexity. From a protein folding in a cell to a polymer blend in a factory, from a sheet of graphene to the heart of a chemical reaction, the concept of a transferable potential provides a unified and profoundly useful way of seeing the world.