
In the quantum mechanical view of a molecule, electrons exist not as distinct particles tied to individual atoms, but as a delocalized cloud of probability density. This poses a fundamental challenge: how can we assign a specific number of electrons to each atom to quantify intuitive chemical concepts like atomic charge, polarity, and bond type? Without a method for this 'electron accounting,' we are left with a holistic but impractical picture of molecular structure. This article delves into one of the earliest and most influential solutions to this problem: Mulliken population analysis.
First, the article will explore the Principles and Mechanisms of the method, breaking down its elegantly simple rule for partitioning shared electron density and its formulation in the language of computational chemistry. We will also uncover the deep conceptual flaws and mathematical frailties that make its results highly dependent on the chosen computational framework. Following this, the section on Applications and Interdisciplinary Connections will examine what the method is good for, demonstrating how it can illuminate chemical trends and bonding interactions, while also comparing its performance against more physically robust modern techniques. Through this exploration, readers will gain a comprehensive understanding of not just how Mulliken analysis works, but also its important place in the historical and pedagogical landscape of computational chemistry.
So, we've talked about molecules being collections of atoms held together by a shared sea of electrons. But this picture, while correct, is a bit like saying a city is a collection of people. It doesn't tell us much about the neighborhoods, the communities, or how individuals group together. How can we get a bit more local? How can we ask a seemingly simple question: in a molecule like water, H₂O, how many electrons "belong" to the oxygen, and how many to each hydrogen? This question is the gateway to understanding chemical concepts like polarity, bond types, and reactivity. The answer, as you might guess, is not so simple. It's not like the electrons are wearing little jerseys with "O" or "H" on them.
This is where population analysis comes in. It's a set of accounting rules designed to partition the molecule's total electron cloud and assign portions of it back to the individual atoms. One of the oldest, simplest, and most instructive of these schemes is the Mulliken population analysis, named after Robert S. Mulliken. It’s a beautiful example of a simple idea that is both incredibly useful for building intuition and deeply flawed in ways that teach us profound lessons about quantum mechanics.
Let's imagine the simplest possible molecule that's more than just a single atom: a chemical bond between two different atoms, A and B. In the language of quantum chemistry, we often describe the electrons in this bond using a Molecular Orbital (ψ), which we build by mixing together Atomic Orbitals (φ_A and φ_B). Think of the atomic orbitals as the "home turf" for electrons on their respective isolated atoms. In the molecule, the electron's new home, the molecular orbital, is a hybrid, a linear combination:

ψ = c_A φ_A + c_B φ_B
Here, c_A and c_B are coefficients that tell us the "flavor" or "character" of the MO—is it more A-like or more B-like?
Now, the probability of finding the electron at any point in space is given by the square of the wavefunction, ψ². Let's expand this out (assuming for simplicity the coefficients and orbitals are real):

ψ² = c_A² φ_A² + c_B² φ_B² + 2 c_A c_B φ_A φ_B
Look closely at these three terms. They represent the entire electron population. The first term, c_A² φ_A², involves only atom A's orbital. It seems reasonable to assign all of this electron density to atom A. Similarly, the second term, c_B² φ_B², belongs to atom B.
But what about the third term, 2 c_A c_B φ_A φ_B? This is the overlap population. It exists only in regions of space where both φ_A and φ_B are non-zero. It's the mathematical signature of the chemical bond, the density that is truly shared between the two atoms. How do we partition this shared density?
Mulliken's answer was elegantly simple, even brutally so: split it down the middle. Fifty-fifty. No questions asked.
So, to find the total electron population on atom A, N_A, we take all of its "own" density and add exactly half of the overlap density. When we integrate this over all space, the total population assigned to A for a single electron in this MO becomes:

N_A = c_A² + c_A c_B S_AB
Here, S_AB is the overlap integral, S_AB = ∫ φ_A φ_B dτ, which is a number that measures how much the two atomic orbitals spatially overlap. The population on atom B would be, by symmetry, N_B = c_B² + c_A c_B S_AB.
If this molecular orbital contained two electrons, as is common in a typical covalent bond, we would simply multiply everything by two. For instance, suppose a calculation gave us c_A = 0.7, c_B = 0.45, and S_AB = 0.5 (illustrative values, chosen so the MO is roughly normalized). The population on atom B would then be 2 × (0.45² + 0.7 × 0.45 × 0.5) = 2 × 0.36 = 0.72. If atom B started out neutral with one valence electron, its partial charge would be its original electron count minus its new population: q_B = 1 − 0.72 = +0.28. This suggests electron density has moved from B to A, making B positive.
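The bookkeeping is simple enough to check by hand, but a few lines of Python make the arithmetic explicit. The coefficient and overlap values here are assumed purely for illustration, not taken from any real calculation:

```python
# Two-orbital Mulliken bookkeeping for a single molecular orbital.
# c_A, c_B, S_AB are assumed illustrative values, not real data.
c_A, c_B, S_AB = 0.7, 0.45, 0.5
n_elec = 2  # a doubly occupied bonding MO

# Population per electron: an atom's "own" density plus half the overlap density.
N_A = c_A**2 + c_A * c_B * S_AB
N_B = c_B**2 + c_A * c_B * S_AB

pop_A, pop_B = n_elec * N_A, n_elec * N_B
charge_B = 1 - pop_B  # atom B contributed one valence electron

print(f"pop_A = {pop_A:.3f}, pop_B = {pop_B:.3f}, q_B = {charge_B:+.2f}")
```

Because atom B's population (0.72) is below the one electron it brought to the bond, it ends up with a positive partial charge.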
This picture is fine for one bond, but a real molecule like caffeine has dozens of orbitals and electrons. Writing out sums for every single orbital would be a nightmare. Thankfully, the mathematics can be compressed into a much more elegant and powerful form using matrices—the language of modern computational chemistry.
Two matrices are key: the density matrix P, whose elements P_μν collect the MO coefficients weighted by how many electrons occupy each orbital (P_μν = Σ_i n_i c_μi c_νi), and the overlap matrix S, whose elements S_μν = ∫ φ_μ φ_ν dτ record how strongly every pair of basis functions overlaps in space.
With these two matrices, the entire Mulliken population analysis for any atom A in any molecule becomes a simple, beautiful matrix operation. The gross electron population on atom A is the sum of the diagonal elements of the product matrix PS that correspond to basis functions on atom A:

N_A = Σ_{μ on A} (PS)_μμ

For a simple case where atom A has only one atomic orbital (let's say it's the first one in our list), the population is just the top-left element of the matrix: N_A = (PS)₁₁.
So, if a computer spits out the matrices P and S for a molecule, all we have to do is multiply them together and read off the numbers on the diagonal. This is precisely what computational chemistry programs do to report Mulliken charges. This matrix formalism neatly packages the entire "split-the-overlap" philosophy into a single, clean calculation.
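As a concrete sketch (assuming nothing beyond NumPy), here is the whole procedure in a few lines. The P and S matrices below are toy stand-ins for what a quantum chemistry program would produce, and `basis_atoms` is a hypothetical bookkeeping list mapping each basis function to the atom it sits on:

```python
import numpy as np

def mulliken_populations(P, S, basis_atoms, n_atoms):
    """Sum the diagonal of PS over the basis functions centered on each atom."""
    gross = np.diag(P @ S)          # gross population of each basis function
    pops = np.zeros(n_atoms)
    for mu, atom in enumerate(basis_atoms):
        pops[atom] += gross[mu]
    return pops

# Toy two-atom system: one basis function per atom, one doubly occupied MO.
c = np.array([0.7, 0.45])           # MO coefficients (illustrative values)
P = 2.0 * np.outer(c, c)            # density matrix: P_mn = n_occ * c_m * c_n
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # overlap matrix

pops = mulliken_populations(P, S, basis_atoms=[0, 1], n_atoms=2)
print(pops)                         # populations on atoms A and B
```

The populations sum to approximately two electrons (the coefficients are only roughly normalized), with the larger share landing on the atom whose orbital dominates the MO.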
Before we start tearing this simple idea apart, let's give it a chance to prove itself. A good physical model should respect basic symmetries. For example, in a perfectly symmetric molecule like nitrogen (N₂) or oxygen (O₂), the two atoms are absolutely identical. There's no reason one should have more electron density than the other. Their partial charges must be zero. Does Mulliken analysis pass this simple test?
Yes, it does, and beautifully so. In a homonuclear diatomic molecule, the laws of quantum mechanics demand that the molecular orbitals themselves reflect the molecule's symmetry. This forces the squared magnitudes of the LCAO coefficients on the two equivalent atoms to be identical in every single molecular orbital. When you plug this equality into the Mulliken formula and sum over all electrons, you find that the calculated electron population on atom A is exactly equal to the population on atom B. Since the molecule is neutral, the charges must sum to zero, and the only way to satisfy this is if both charges are identically zero. This is a reassuring result. It shows that Mulliken's scheme, for all its simplicity, has some physical sense hard-wired into its mathematical structure.
So far, so good. But the history of science is filled with beautiful, simple ideas that turn out to be wrong, or at least dangerously incomplete. Mulliken analysis is a prime example. Its greatest weakness stems from its core rule: the 50/50 split is based on the mathematical labels of the basis functions, not on the physical location of the electron density in 3D space.
Let's consider the carbon monoxide molecule, CO. Oxygen is more electronegative than carbon, so we expect a charge separation, with oxygen being negative and carbon being positive. If we do a calculation with a simple, "minimal" basis set, we might get a small positive charge on carbon.
But what happens if we use a better, more flexible "polarized" basis set? These basis sets give the electrons more freedom to move around and describe the electron cloud more accurately. When we do this, the energy of our calculated molecule gets lower (better), but the Mulliken charge on carbon can jump to a wildly different value! The "real" charge didn't change; our accounting method gave a different answer just because we used a better set of tools. This is a huge red flag. It means Mulliken charges are not physical observables; they are artifacts of the specific basis set used in the calculation.
This problem can be taken to a logical and absurd extreme with a thought experiment. Imagine a hydride ion, H⁻, which is a single proton with a cloud of two electrons around it. Now, let's do a calculation, but we'll add a "ghost atom" a short distance away. This ghost has no proton (nuclear charge Z = 0), but we place a very diffuse, spread-out basis function at its location.
The variational principle, which guides the calculation to the lowest energy, will seize this opportunity. It will use the diffuse ghost function to better describe the puffy, weakly-held outer parts of the electron cloud, thus lowering the total energy. The calculation is now "better" in a mathematical sense. But what does the Mulliken analysis report? It sees that a significant part of the electron density is being described by a function labeled as belonging to the ghost atom. So, it assigns that electron density to the ghost! The result is a nonsensical prediction: the hydrogen atom becomes positively charged, and the ghost atom, with no nucleus at all, acquires a large negative charge. This is patently absurd. The electrons are, of course, still physically associated with the only proton in the system. This example powerfully illustrates the fatal flaw: Mulliken analysis partitions electrons based on the zip codes of the basis functions, not their actual street address in physical space.
The problems don't stop there. The math behind the formula can lead to even stranger results. Because the matrix product PS has no special mathematical constraints on its diagonal elements, you can sometimes find that the calculated Mulliken population of a single atomic orbital is negative or greater than two. This is physically impossible—you can't have a negative number of electrons, and the Pauli exclusion principle forbids more than two electrons in a single orbital. It's another sign that the method is mathematically fragile.
More robust methods, like Löwdin population analysis, fix this specific issue by first transforming the basis set into a perfectly orthogonal one before doing any accounting. In this mathematically "cleaner" world, orbital populations are always sanely bounded between 0 and 2.
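The Löwdin variant differs only in which diagonal we read: transform to an orthogonal basis with the matrix square root S^(1/2), then take the diagonal of S^(1/2) P S^(1/2) instead of PS. A minimal sketch, using toy matrices invented for illustration (not output from a real calculation):

```python
import numpy as np

def lowdin_populations(P, S, basis_atoms, n_atoms):
    """Populations from the diagonal of S^(1/2) P S^(1/2) (Löwdin scheme)."""
    evals, evecs = np.linalg.eigh(S)                     # S is symmetric
    S_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # matrix square root of S
    diag = np.diag(S_half @ P @ S_half)
    pops = np.zeros(n_atoms)
    for mu, atom in enumerate(basis_atoms):
        pops[atom] += diag[mu]
    return pops

# Toy density and overlap matrices: one doubly occupied two-orbital MO.
c = np.array([0.7, 0.45])
P = 2.0 * np.outer(c, c)
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])

pops = lowdin_populations(P, S, basis_atoms=[0, 1], n_atoms=2)
```

The total electron count, trace(PS), is unchanged by the transformation; only how it is divided between atoms shifts, and each orbital's share now stays between 0 and 2.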
Furthermore, applying the method to more complex cases like radicals (molecules with unpaired electrons) requires extreme care. A common computational approach, Unrestricted Hartree-Fock (UHF), often produces a solution that is contaminated with artificial contributions from higher spin states. The resulting Mulliken "spin populations," which are intended to show where the unpaired electron lives, are polluted by this artifact and no longer give a clear picture.
So, where does this leave us? Mulliken population analysis is a classic, a fundamental pedagogical tool. It introduces the essential problem of partitioning electron density in a clear and intuitive way. But its arbitrary and basis-set-dependent nature means that its results should be taken with a large grain of salt, if not the whole shaker. It's a first-year physics model in a graduate-level world. It's more of a qualitative indicator than a quantitative measure. Its true value, perhaps, lies not in the numbers it produces, but in the critical questions it forces us to ask about what it truly means for an atom to "own" an electron within a molecule.
Now that we have seen the machinery of Mulliken population analysis, we might ask, what is it good for? After all, a set of numbers churned out by a computer is of little use unless it tells us something interesting about the world. We have taken apart the watch and seen the gears; now let's see what time it tells. The true power of a theoretical tool is revealed not in its mathematical elegance, but in the connections it forges between the abstract world of wavefunctions and the tangible reality of chemistry, physics, and materials science.
Mulliken's scheme, for all its beautiful simplicity, offers a first, crucial bridge. It takes the delocalized, wavelike nature of electrons described by molecular orbitals and attempts to pack that information back into a picture chemists have loved for centuries: atoms as individual entities with specific properties, like charge.
Imagine you have just calculated the wavefunction for an ammonia molecule, NH₃. You have a complex mathematical object, but your chemist's intuition screams that the nitrogen atom, being more electronegative, should be slightly negative and the hydrogen atoms slightly positive. Mulliken analysis gives this intuition a number. By subtracting the "gross atomic population"—the number of electrons assigned to the nitrogen atom—from its nuclear charge, we get a net atomic charge. For instance, a calculation might tell us that nitrogen has a population of 7.756 electrons. Since a neutral nitrogen nucleus has a charge of +7, this gives it a net Mulliken charge of 7 − 7.756 = −0.756. This single number beautifully confirms our chemical intuition.
This is more than just a labeling exercise. It allows us to see quantitative trends where we previously saw only qualitative ones. Consider the series of simple hydrides across the second period: methane (CH₄), ammonia (NH₃), water (H₂O), and hydrogen fluoride (HF). We know that the electronegativity of the central atom increases dramatically from carbon to fluorine. What does this do to the hydrogens? Intuitively, the central atom should pull electron density away from hydrogen more and more strongly along this series.
Mulliken analysis allows us to watch this happen. If we perform a consistent set of calculations on these molecules, we find that the Mulliken charge on the hydrogen atoms becomes progressively more positive as we move from CH₄ to HF. The analysis captures the systematic stripping of electron density from hydrogen by its increasingly greedy partner. Despite the method's internal simplifications, it successfully reproduces one of the most fundamental trends in the periodic table. The underlying reason is buried in the coefficients of the molecular orbitals. In a polar bond like that in hydrogen fluoride, the bonding molecular orbital is not an equal mix of hydrogen and fluorine atomic orbitals; it is weighted more heavily toward the more electronegative fluorine. The Mulliken recipe, by processing these coefficients and their overlap, translates this orbital asymmetry into a simple, intuitive partial charge.
But the story doesn't end with atomic charges. The heart of the Mulliken method is its treatment of the "overlap population"—the electron density that doesn't belong to any single atom but exists in the shared space between them. Think of it as a joint venture between two atoms. Mulliken's famous (and, as we will see, infamous) decision was to split the proceeds of this joint venture exactly 50/50, regardless of the relative contributions of the partners.
This overlap population itself, however, tells a fascinating tale. Its magnitude is often taken as a measure of the covalent bond order. But even more interestingly, its sign speaks volumes. In a typical covalent bond, the atomic orbitals overlap constructively, leading to a build-up of electron density between the nuclei and a positive overlap population.
What if it's negative? Consider the two hydrogen atoms in a water molecule. They are not directly bonded to each other. A Mulliken analysis often reveals a small, negative overlap population between them. This is not a mistake. It's a subtle signature of an antibonding interaction. It tells us that the net effect of the molecular electronic structure is a slight depletion of electron density in the region between the two hydrogens. They are, in a sense, weakly repelling each other electronically, a direct consequence of the Pauli exclusion principle played out through the molecular orbitals.
This same tool can even give us a peek into the world of magnetism. In a spin-polarized calculation, where 'spin-up' and 'spin-down' electrons are treated separately, we can perform a Mulliken analysis on each set. The result is a "spin population" on each atom, which is simply the number of spin-up electrons minus the number of spin-down electrons. This value is a direct computational estimate of the number of unpaired electrons on that atom. For a transition metal ion in a crystal, like manganese in a perovskite oxide, this allows us to compare the result of a complex Density Functional Theory calculation directly against the simple predictions of Hund's rule, connecting the most sophisticated computational methods to the back-of-the-envelope sketches of introductory chemistry.
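In the matrix language from earlier, the spin population is just the Mulliken recipe applied to the difference of the spin-up and spin-down density matrices. A minimal sketch with invented matrices, here a single unpaired spin-up electron sitting entirely on atom A:

```python
import numpy as np

def mulliken_spin_populations(P_up, P_down, S, basis_atoms, n_atoms):
    """Per-atom spin population: diagonal of (P_up - P_down) @ S, summed per atom."""
    diag = np.diag((P_up - P_down) @ S)
    spins = np.zeros(n_atoms)
    for mu, atom in enumerate(basis_atoms):
        spins[atom] += diag[mu]
    return spins

# Toy case (illustrative): one unpaired spin-up electron localized on atom A.
c_up = np.array([1.0, 0.0])
P_up = np.outer(c_up, c_up)          # singly occupied spin-up orbital
P_down = np.zeros((2, 2))            # no corresponding spin-down density
S = np.array([[1.0, 0.2],
              [0.2, 1.0]])

spins = mulliken_spin_populations(P_up, P_down, S, basis_atoms=[0, 1], n_atoms=2)
print(spins)   # unpaired-electron count on each atom
```

For a real radical the two density matrices would come from a spin-polarized calculation, and the per-atom values could be compared against Hund's-rule expectations.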
Up to now, we have painted a rather rosy picture. Mulliken analysis seems like a wonderfully versatile tool. But here we must pause and think like physicists. We must always be suspicious of our assumptions. And the Mulliken method rests on an assumption that is as simple as it is troubling: the 50/50 split of the overlap population.
Is this fair? Is it physical? If an electron is in a region of overlap between a highly electronegative fluorine atom and a hydrogen atom, should its density really be divided equally between them? The answer, of course, is no. This arbitrary division is the method's Achilles' heel. For bonds between atoms of similar electronegativity, it's a reasonable approximation. But for highly polar or ionic bonds, it can be terribly misleading.
Consider zinc oxide (ZnO), a material with significant ionic character. A Mulliken analysis might assign the zinc atom only a modest positive charge, well short of the formal +2. This suggests a largely covalent picture. But other, more sophisticated methods, like the Quantum Theory of Atoms in Molecules (QTAIM), which partitions electron density based on its actual topological features, can yield a much larger charge for the very same system! This is not a minor disagreement; it is a completely different physical picture. The QTAIM result suggests a highly ionic bond, which aligns better with many of ZnO's properties. The failure of Mulliken analysis here is a direct result of its enforced 50/50 split, which systematically underestimates charge separation in polar systems.
This is not the only problem. The results of a Mulliken analysis are notoriously sensitive to the choice of the atomic orbital basis set used in the calculation—a purely mathematical construct with no direct physical meaning. Adding diffuse, spread-out functions to the basis set can cause Mulliken's accounting scheme to assign electrons to atoms that are far away, leading to nonsensical results. Other methods, such as Löwdin population analysis, were developed to remedy some of these issues by first transforming the basis functions into an orthogonal set, but the fundamental ambiguity of partitioning the wavefunction remains.
This brings us to a deeper question: what do we want atomic charges for in the first place? Often, the goal is to build simplified models—or "force fields"—that can predict how large molecules like proteins or materials will behave, without having to solve the Schrödinger equation every time. In these models, the electrostatic interaction between molecules is governed by the electric field one molecule creates in the space around it.
Therefore, a "good" set of atomic charges is one that correctly reproduces this external electrostatic potential (ESP). This realization led to a completely different philosophy. Instead of partitioning the wavefunction, why not work backward? We can calculate the "true" electrostatic potential around a molecule from quantum mechanics, and then find the set of atom-centered point charges that best reproduces this potential. This is the idea behind ESP-derived charge models like RESP (Restrained Electrostatic Potential).
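The fitting step itself is ordinary constrained least squares. Here is a toy sketch of the idea, in atomic units, with an invented geometry, an exact point-charge "reference" potential, and no restraints—so this illustrates ESP fitting in general, not the actual RESP procedure:

```python
import numpy as np

def fit_esp_charges(atom_xyz, grid_xyz, v_ref, total_charge=0.0):
    """Least-squares point charges reproducing v_ref on a grid, with fixed total charge."""
    # Design matrix: potential at grid point i due to a unit charge on atom j.
    dist = np.linalg.norm(grid_xyz[:, None, :] - atom_xyz[None, :, :], axis=2)
    A = 1.0 / dist
    # Lagrange-multiplier system: minimize |A q - v_ref|^2 subject to sum(q) fixed.
    n = len(atom_xyz)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A.T @ A
    M[:n, n] = M[n, :n] = 1.0
    b = np.append(A.T @ v_ref, total_charge)
    return np.linalg.solve(M, b)[:n]

# Invented two-atom geometry; the reference potential is generated from known
# charges, so the fit should recover them exactly.
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
grid = np.array([[3.0, 0, 0], [-3.0, 0, 0], [0, 3.0, 0],
                 [0, -3.0, 0], [0, 0, 3.0], [4.0, 1.0, 1.0]])
true_q = np.array([+0.4, -0.4])
v_ref = (1.0 / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)) @ true_q

q = fit_esp_charges(atoms, grid, v_ref, total_charge=0.0)
print(q)   # charges recovered from the potential they generated
```

Real ESP/RESP schemes add careful grid selection outside the van der Waals surface and (for RESP) hyperbolic restraints toward zero, but the core inversion is exactly this.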
These charges are, by construction, better suited for describing intermolecular interactions. They are also generally less sensitive to the choice of basis set. For these reasons, methods like RESP have become the gold standard for developing the force fields used in modern drug discovery and materials simulation.
So, where does this leave our old friend, Mulliken analysis? It remains a concept of immense pedagogical value. It is often the first method students learn, providing a tangible link between the quantum mechanical wavefunction and chemical intuition. It can still provide useful qualitative insights and reveal trends. But in the world of modern, high-precision computational science, it serves as a foundational stepping stone rather than a final destination. It taught us what questions to ask and revealed the pitfalls of overly simple answers, paving the way for the more physically robust and predictive tools we use today.