
Conductance-Based Model

Key Takeaways
  • Unlike simpler current-based models, conductance-based models treat synapses as variable resistors that change the neuron's intrinsic properties.
  • This framework naturally explains powerful computational primitives like divisive shunting inhibition and sub-linear input summation (saturation).
  • The Hodgkin-Huxley model, a quintessential conductance-based model, uses voltage-gated conductances to explain the generation of action potentials.
  • These models are essential for understanding network dynamics like the high-conductance state, modeling neurological diseases, and designing bio-inspired neuromorphic hardware.

Introduction

To decode the brain's computational language, we must first build accurate models of its fundamental units: the neurons. For decades, neuroscientists have sought to capture the electrical behavior of these intricate cells in mathematical form. While simple models provide valuable intuition, they often overlook the nuanced physics that give neurons their immense computational power. This raises a critical question: what is the most biophysically faithful way to model synaptic input, and what profound computational capabilities does this realism unveil?

This article journeys into the heart of modern computational neuroscience by exploring the conductance-based model, a framework that describes neural activity with remarkable accuracy. In the first chapter, **Principles and Mechanisms**, we will dissect the core concept of ionic conductance, contrast it with the simpler current-based approach, and uncover the powerful consequences of this distinction, including shunting inhibition and the generation of action potentials. Subsequently, in **Applications and Interdisciplinary Connections**, we will see these principles in action, exploring how conductance-based models provide deep insights into everything from single-neuron behavior and network dynamics to the mechanisms of disease and the design of next-generation, brain-inspired computers.

Principles and Mechanisms

To understand how a neuron computes, we must first understand the physics of its membrane. At its heart, a neuron is a tiny, intricate electrical device. It’s a bag made of a fatty membrane, separating a salty intracellular fluid from a salty extracellular one. The different concentrations of ions like sodium ($\text{Na}^+$), potassium ($\text{K}^+$), and chloride ($\text{Cl}^-$) on either side of this membrane create a small voltage difference, much like a tiny battery. This is the **resting membrane potential**.

However, this membrane isn't a perfect insulator. It’s leaky. It’s studded with protein channels that are always open, allowing a small, steady trickle of ions to flow across. Electrically, we can think of the membrane as a capacitor ($C_m$)—it can store charge—in parallel with a resistor (its leakiness, or **leak conductance** $g_L$) and a battery (the **leak reversal potential** $E_L$, which is close to the resting potential). The fundamental equation for this passive, resting neuron is a simple statement of charge conservation: the current needed to change the voltage stored on the capacitor must equal the current flowing through the leaks.

$$C_m \frac{dV}{dt} = -g_L(V - E_L)$$
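As a concrete sketch, this passive equation can be integrated numerically. The parameter values below (a 200 pF capacitance and a 10 nS leak, giving the 20 ms time constant quoted later in this article) are illustrative assumptions, not measurements from a particular cell:

```python
# Forward-Euler integration of the passive membrane equation
# C_m dV/dt = -g_L (V - E_L). Illustrative parameter values.
C_m = 200e-12     # membrane capacitance: 200 pF
g_L = 10e-9       # leak conductance: 10 nS
E_L = -70e-3      # leak reversal potential: -70 mV
dt = 0.1e-3       # 0.1 ms time step

V = -50e-3        # start 20 mV depolarized from rest
for _ in range(2000):                    # simulate 200 ms
    V += dt * (-g_L * (V - E_L)) / C_m

tau_m = C_m / g_L                        # membrane time constant: 20 ms
```

With no input, the voltage simply relaxes back to $E_L$ exponentially, with time constant $\tau_m = C_m/g_L = 20\,\mathrm{ms}$.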

This simple setup describes a neuron just sitting there. But the magic of the brain lies in communication. How does one neuron talk to another? It does so through synapses. Our journey is to understand how we model the electrical effect of a synapse, and to see why the right way of modeling it has profound consequences for computation.

A Simple Idea: Synapses as Current Injectors

Let’s start with the most straightforward idea. What if a synapse is just like a tiny needle that injects a puff of electrical current into the postsynaptic neuron? This is the essence of a **current-based model**. The incoming signal causes a synaptic current, $I_{syn}(t)$, to be added to our equation:

$$C_m \frac{dV}{dt} = -g_L(V - E_L) + I_{syn}(t)$$

This model is appealingly simple. The effect of the synapse is a pure addition. It doesn’t matter what the neuron is currently doing; an input of a certain size always provides the same "push". A crucial consequence is that the neuron's intrinsic properties remain unchanged. The total conductance is still just $g_L$, so its characteristic response time—the **membrane time constant** $\tau_m = C_m/g_L$—is fixed, regardless of how much synaptic input it receives.
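The fixed "push" follows directly by setting $dV/dt = 0$ in the current-based equation. A small sketch with illustrative values:

```python
# Steady state of the current-based model: 0 = -g_L(V - E_L) + I_syn,
# so V_inf = E_L + I_syn / g_L. Illustrative values.
C_m = 200e-12
g_L = 10e-9          # 10 nS leak
E_L = -70e-3         # -70 mV resting potential
I_syn = 0.2e-9       # 0.2 nA of injected synaptic current

V_inf = E_L + I_syn / g_L   # -50 mV: a fixed 20 mV "push"
tau_m = C_m / g_L           # 20 ms, unchanged by any amount of input
```

Whatever the neuron was doing, the same input always adds the same depolarization, and the time constant never budges.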

While simple and useful for some applications, this model misses a deep and beautiful truth about how synapses actually work. It’s like describing a conversation as one person simply shouting louder, ignoring the fact that the listener’s state of mind affects how they interpret the words.

The Real Mechanism: Synapses as Variable Resistors

Nature’s solution is far more elegant. A synapse isn't a current injector. It is a tiny, molecular gate—an ion channel—that opens in response to a neurotransmitter. When it opens, it doesn't create current out of thin air; it momentarily changes the membrane's resistance, or rather its **conductance** (which is just the inverse of resistance), to a specific type of ion.

To understand the current that flows, we must ask: why would an ion want to move in the first place? The answer lies in thermodynamics. Each ion is subject to two forces: a chemical force due to its concentration difference across the membrane, and an electrical force due to the membrane potential. There exists a unique voltage for each ion where these two forces perfectly balance, and there is no net flow. This voltage is called the **Nernst potential**, or the ion's equilibrium potential, $E_{\text{ion}}$. You can think of $E_{\text{ion}}$ as the ion's "happy place"—the voltage the membrane would have if it were only permeable to that ion.

The actual membrane potential, $V$, is rarely at an ion's happy place. The difference between the two, $(V - E_{\text{ion}})$, is called the **driving force**. It is this voltage difference that pushes ions through any open channel, just as a pressure difference pushes water through a pipe. The resulting current is simply given by a version of Ohm's Law:

$$I_{\text{ion}} = g_{\text{ion}} (V - E_{\text{ion}})$$

Here, $g_{\text{ion}}$ is the conductance of the open channels for that ion. A **conductance-based synapse** is therefore modeled as an additional, time-varying conductance $g_{syn}(t)$ with its own characteristic reversal potential $E_{syn}$. Our membrane equation becomes:

$$C_m \frac{dV}{dt} = -g_L(V - E_L) - g_{syn}(t)(V - E_{syn})$$

(Note: by convention, we often write the current as $g_{syn}(E_{syn} - V)$ so that an inward, depolarizing current is positive.) This small change—making the synaptic current depend on the membrane voltage $V$—is not just a minor correction. It fundamentally alters the computational properties of the neuron.
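To see the equation in action, here is a minimal simulation of a single excitatory event. The synaptic conductance is modeled as an instantaneous rise followed by an exponential decay, a common simplification; all parameter values are illustrative:

```python
import math

# EPSP from a conductance-based synapse with an exponentially decaying
# conductance g_syn(t) = g_max * exp(-t / tau_s). Illustrative values.
C_m, g_L, E_L = 200e-12, 10e-9, -70e-3
g_max, tau_s, E_syn = 5e-9, 5e-3, 0.0    # excitatory: E_syn = 0 mV
dt = 0.01e-3

V, t, V_peak = E_L, 0.0, E_L
for _ in range(10000):                    # 100 ms
    g_syn = g_max * math.exp(-t / tau_s)
    dVdt = (-g_L * (V - E_L) - g_syn * (V - E_syn)) / C_m
    V += dt * dVdt
    t += dt
    V_peak = max(V_peak, V)               # track the EPSP peak
```

Notice that the synaptic current shrinks as $V$ rises toward $E_{syn}$, so the depolarization is self-limiting—the saturation effect discussed later in this chapter.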

The Profound Consequences of Conductance

Let’s rearrange the conductance-based equation to see what’s really going on. By gathering the terms involving $V$, we get:

$$C_m \frac{dV}{dt} = -(g_L + g_{syn}(t))V + (g_L E_L + g_{syn}(t)E_{syn})$$

Notice something remarkable. The term multiplying $V$ is now $(g_L + g_{syn}(t))$. The synapse doesn’t just add a current; it adds its own conductance to the neuron’s total conductance. This has two immediate and powerful consequences.

Shunting: The Art of Division

First, the **effective membrane time constant** is no longer fixed. It becomes $\tau_{\text{eff}}(t) = C_m / (g_L + g_{syn}(t))$. Every time a synapse opens, it increases the total conductance, making the neuron "leakier" and causing its time constant to become shorter. The neuron becomes faster and more responsive, forgetting its past state more quickly. In the active cerebral cortex, neurons are constantly bombarded with balanced excitatory and inhibitory inputs, creating a **high-conductance state**. In this regime, the effective time constant can be dramatically shortened—for instance, from $20\,\mathrm{ms}$ at rest to under $3\,\mathrm{ms}$ during activity—making the neuron a much faster integrator of information.
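This shift is just arithmetic on the formula above. The sketch below reproduces the quoted change in time constant; the 60 nS of background synaptic conductance is an assumed value chosen to produce that shift:

```python
# Effective time constant under synaptic bombardment.
# tau_eff = C_m / (g_L + g_syn). Illustrative values.
C_m = 200e-12                  # 200 pF
g_L = 10e-9                    # 10 nS leak
g_syn = 60e-9                  # assumed background synaptic conductance

tau_rest = C_m / g_L           # 20 ms at rest
tau_eff = C_m / (g_L + g_syn)  # ~2.9 ms in the high-conductance state
```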

This dynamic change in conductance enables a subtle but powerful form of computation called **shunting inhibition**. Consider an inhibitory synapse whose reversal potential $E_{inh}$ is very close to the neuron's resting potential. When it opens, it causes almost no change in voltage by itself. It seems to do nothing. But it has added its conductance to the membrane. Now, if an excitatory input arrives simultaneously, it finds a much leakier membrane. The excitatory current is "shunted" away through the open inhibitory channels.

Imagine a neuron with a leak conductance of $10\,\mathrm{nS}$. An excitatory input current of $0.2\,\mathrm{nA}$ would cause a steady depolarization of $20\,\mathrm{mV}$ (from Ohm's law, $\Delta V = I/g$). Now, activate a shunting inhibitory synapse that adds another $10\,\mathrm{nS}$ of conductance. The total conductance is now $20\,\mathrm{nS}$. The same $0.2\,\mathrm{nA}$ excitatory input now only produces a $10\,\mathrm{mV}$ depolarization. The shunting input has, in effect, divided the excitatory signal by two. This is not subtraction; it's a divisive, multiplicative interaction, a far more powerful computational primitive than simple addition. An inhibitory postsynaptic potential (IPSP) doesn't have to be a hyperpolarization; it can be a silent suppression of excitation.
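The arithmetic of this division can be checked directly:

```python
# Shunting inhibition as division, with the numbers from the text.
g_L = 10e-9           # 10 nS leak conductance
I_exc = 0.2e-9        # 0.2 nA excitatory current

dV_alone = I_exc / g_L                  # 20 mV depolarization
g_shunt = 10e-9                         # shunting synapse adds 10 nS
dV_shunted = I_exc / (g_L + g_shunt)    # 10 mV: the signal is halved
```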

Saturation: The Law of Diminishing Returns

Second, the dependence on the driving force $(V - E_{syn})$ introduces a natural form of saturation. In a current-based model, two identical inputs produce twice the current and, ideally, twice the voltage response. Summation is linear.

In a conductance-based model, this is not true. When an excitatory synapse opens, it pushes the membrane potential $V$ up toward its reversal potential $E_{exc}$ (which is typically around $0\,\mathrm{mV}$). As $V$ gets closer to $E_{exc}$, the magnitude of the driving force $(V - E_{exc})$ shrinks. The next bit of synaptic conductance that opens will produce less current than the first, because the "desire" of the ions to flow has diminished. This leads to a sub-linear summation of inputs.

This effect is not trivial. Let's compare the depolarization predicted by the two models. Suppose we have a constant synaptic input that, in a conductance-based model, creates a synaptic conductance $\bar{g}_S$. A naive current-based model might approximate this by injecting a fixed current equal to the initial synaptic current at rest, $\bar{I}_S = \bar{g}_S (E_{exc} - E_L)$. It turns out the current-based model will always overestimate the final depolarization. In the specific case where the synaptic conductance equals the leak conductance ($\bar{g}_S = g_L$), the current-based model predicts exactly twice the depolarization of the more realistic conductance-based model. This inherent non-linearity is a fundamental feature of neural integration, preventing inputs from summing to unrealistic levels and contributing to the rich dynamics of computation.
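The factor-of-two claim is easy to verify by solving both models for their steady state:

```python
# Steady-state depolarization: current-based vs conductance-based,
# in the special case g_S = g_L. Illustrative values.
g_L, E_L, E_exc = 10e-9, -70e-3, 0.0
g_S = g_L

# Current-based: inject the synaptic current as evaluated at rest.
I_S = g_S * (E_exc - E_L)
dV_current = I_S / g_L                              # 70 mV

# Conductance-based: solve 0 = -g_L(V - E_L) - g_S(V - E_exc) for V.
V_inf = (g_L * E_L + g_S * E_exc) / (g_L + g_S)
dV_conductance = V_inf - E_L                        # 35 mV: exactly half
```

The conductance-based steady state is a conductance-weighted average of the reversal potentials, which can never overshoot $E_{exc}$, while the current-based prediction grows without bound.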

The Masterpiece: Voltage-Gated Conductances and the Action Potential

So far, we have treated the synaptic conductance $g_{syn}(t)$ as an input dictated from the outside. But what if the neuron's own channels had conductances that depended on its own voltage? This is the secret to the most famous signal in neuroscience: the **action potential**, or "spike".

The **Hodgkin-Huxley model**, a monumental achievement in science, is the canonical implementation of this idea. Alan Lloyd Hodgkin and Andrew Fielding Huxley realized that the membrane contains at least two crucial types of voltage-gated channels: a sodium channel that opens rapidly when the neuron is depolarized, and a potassium channel that opens more slowly.

  • The **sodium current** ($I_{Na} = \bar{g}_{Na}m^3h(V - E_{Na})$) provides a rapid positive feedback loop. Depolarization opens sodium channels, sodium ions rush in, causing more depolarization, which opens even more sodium channels. This is the explosive upswing of the action potential.
  • The **potassium current** ($I_K = \bar{g}_{K}n^4(V - E_K)$) provides a slower, negative feedback. The depolarization also slowly opens potassium channels. Potassium ions rush out, pulling the membrane potential back down and terminating the spike.

The full model is a system of differential equations describing the interplay of these voltage-dependent conductances. It is the quintessential conductance-based model. It not only reproduces the shape of an action potential with stunning accuracy but also explains phenomena like the firing threshold and the refractory period.
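The full system is small enough to simulate directly. The sketch below uses the classic squid-axon parameters and rate functions (units: mV, ms, µA/cm², mS/cm², µF/cm²), integrated with forward Euler; a constant 10 µA/cm² drive produces repetitive spiking:

```python
import math

# Forward-Euler sketch of the Hodgkin-Huxley model with the classic
# squid-axon parameters.
C_m = 1.0
g_Na, g_K, g_leak = 120.0, 36.0, 0.3
E_Na, E_K, E_leak = 50.0, -77.0, -54.387

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

V, m, h, n = -65.0, 0.053, 0.596, 0.318    # approximate resting state
dt, I_ext = 0.01, 10.0                     # step (ms), drive (uA/cm^2)
n_spikes, above = 0, False

for _ in range(5000):                      # 50 ms of simulated time
    I_Na = g_Na * m**3 * h * (V - E_Na)    # fast positive feedback
    I_K = g_K * n**4 * (V - E_K)           # slower negative feedback
    I_L = g_leak * (V - E_leak)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    if V > 0.0 and not above:              # count upward zero crossings
        n_spikes += 1
    above = V > 0.0
```

Everything in the spike—threshold, upswing, repolarization, refractoriness—emerges from the interplay of these voltage-dependent conductances; no spike mechanism is hand-coded.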

This powerful framework allows us to understand the diverse "personalities" of different neurons. By changing the properties of the conductances, we can create models that behave in qualitatively different ways. For example, some neurons begin firing at an arbitrarily low frequency and smoothly increase their rate with more input (Type I excitability, arising from a SNIC bifurcation), while others abruptly jump to a non-zero firing rate and show subthreshold oscillations (Type II excitability, arising from a Hopf bifurcation). The conductance-based framework provides a direct bridge from the biophysics of ion channels to the rich computational dynamics of the cells they constitute. It is the language that allows us to write down the laws of the brain's electrical machinery.

Applications and Interdisciplinary Connections

Having journeyed through the principles and mechanisms of conductance-based models, we now arrive at the most exciting part of our exploration: seeing them in action. A scientific model, no matter how elegant, proves its worth when it leaves the blackboard and helps us understand the real world, make predictions, and even build new technologies. The true beauty of the conductance-based framework is not just in its biophysical fidelity, but in its profound explanatory power. It is a key that unlocks secrets of neural computation, sheds light on the origins of devastating diseases, and inspires the design of a new generation of intelligent machines. Let us now see how these equations, seemingly just a collection of conductances and potentials, breathe life into our understanding of the brain.

The Symphony of a Single Neuron

If we were to model a neuron as a simple bucket that just adds up incoming currents, we would miss the intricate music it plays. A real neuron is a masterful musician, and its instrument is its array of ion channels. Conductance-based models allow us to hear this symphony.

Consider a common phenomenon called spike-frequency adaptation, where a neuron, even under constant stimulation, gradually slows its firing. Experiments show this often happens in distinct phases—a quick initial slowdown followed by a much more gradual one. A simple model struggles to explain this. But a conductance-based model reveals the orchestra at work. It shows us that different ion channels, like different sections of an orchestra, have their own tempo. A potassium channel like the M-current ($I_M$) might activate over tens of milliseconds, producing the fast adaptation. Meanwhile, a different channel, one sensitive to the slow buildup and clearance of intracellular calcium ($I_{AHP}$), can operate over several seconds, creating the slow adaptation phase. The conductance-based model, by treating these as separate physical entities with their own dynamics, naturally reproduces this complex, multi-timescale behavior. In contrast, simpler models that lump all adaptation into one abstract variable are like trying to represent an entire symphony with a single kazoo. Furthermore, these models, by simulating the actual opening and closing of sodium and potassium channels, faithfully reproduce the beautiful, stereotyped shape of the action potential itself—its sharp rise, swift fall, and subtle after-hyperpolarization—details that are lost in caricature-like "integrate-and-fire" models.

This richness extends to how neurons listen to each other. In simpler models, an "inhibitory" synapse is always inhibitory. It injects a negative current, period. But nature is more subtle. In a conductance-based model, the effect of a synapse depends on the synaptic conductance $g_{syn}$ and the driving force, $(V - E_{syn})$. For the brain's main inhibitory neurotransmitter, GABA, the synapse is a channel for chloride ions, and its reversal potential, $E_{GABA}$, is set by the chloride concentration inside and outside the cell. In a mature, healthy neuron, intracellular chloride is low, making $E_{GABA}$ very negative (say, $-80\,\mathrm{mV}$). If the neuron's membrane potential $V$ is at $-65\,\mathrm{mV}$, opening the channel causes chloride to rush in, hyperpolarizing the cell—classic inhibition.

But what if the cell's internal chloride concentration rises? This happens in developing brains and, pathologically, after nerve injury. A conductance-based model, armed with the Nernst equation, predicts that $E_{GABA}$ will become less negative. If it shifts to, say, $-40\,\mathrm{mV}$, then activating the synapse when the neuron is at $-65\,\mathrm{mV}$ will cause chloride to leave the cell. This efflux of negative ions depolarizes the neuron! The "inhibitory" synapse now provides an excitatory push. A current-based model, which lacks the concept of a reversal potential, is blind to this dramatic, state-dependent reversal of function. This is not just a theoretical curiosity; it's a fundamental mechanism of brain plasticity and disease.
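The Nernst equation makes this shift quantitative. With illustrative chloride concentrations (the exact values below are assumptions for the sketch, not measurements), low internal chloride puts $E_{GABA}$ near $-85\,\mathrm{mV}$, while pathological accumulation pushes it toward $-37\,\mathrm{mV}$:

```python
import math

# Nernst potential for chloride (valence z = -1):
# E_Cl = (R*T / (z*F)) * ln([Cl]_out / [Cl]_in).
R, F = 8.314, 96485.0      # gas constant (J/mol/K), Faraday (C/mol)
T = 310.0                  # body temperature, kelvin
Cl_out = 120.0             # extracellular chloride, mM (assumed)

def E_Cl_mV(Cl_in):
    # z = -1 flips the sign relative to a cation's Nernst potential.
    return -(R * T / F) * math.log(Cl_out / Cl_in) * 1e3

E_mature = E_Cl_mV(5.0)    # low internal chloride: hyperpolarizing GABA
E_injured = E_Cl_mV(30.0)  # chloride buildup: depolarized E_GABA
```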

This voltage-dependence is also the secret to learning. The leading theory of synaptic plasticity, or how connections between neurons strengthen or weaken, hinges on the NMDA receptor. This receptor is a molecular marvel: it requires both a neurotransmitter (glutamate) to bind and for the postsynaptic membrane to be sufficiently depolarized to expel a magnesium ion that plugs its pore. Only then can calcium flow in and trigger the molecular cascades for learning. A conductance-based model captures this beautifully by making the NMDA conductance, $g_{NMDA}$, a function of voltage. Simpler models cannot. To truly simulate how a synapse learns from the precise timing of pre- and postsynaptic spikes (a phenomenon called Spike-Timing-Dependent Plasticity, or STDP), one needs to account for this voltage-dependent coincidence detection. And to get it truly right, multi-compartment conductance-based models are needed to track how electrical signals, like back-propagating action potentials, decay as they travel along the intricate branches of a neuron's dendrites.

The Living Network: From Cells to Computation

When we zoom out from a single neuron to the bustling networks of the living brain, the conductance-based perspective becomes even more crucial. In vivo, a cortical neuron is not sitting in silence, but is constantly bombarded by thousands of excitatory and inhibitory synaptic inputs. This creates what is known as the "high-conductance state."

A simple current-based model would see this bombardment as just a large, noisy input current. But a conductance-based model reveals something much more profound. The thousands of active synapses add their conductances to the neuron's own leak conductance, dramatically increasing the total conductance of the membrane. This has two transformative effects. First, it drastically lowers the neuron's input resistance ($R_{eff} = 1/g_{total}$). The neuron becomes "stiffer"—a given input current now produces a much smaller voltage change. This acts as a form of "divisive gain control," making the neuron's output firing rate less sensitive to the absolute magnitude of its input and more sensitive to the relative strength of competing inputs.

Second, the increased conductance shortens the membrane's effective time constant ($\tau_{eff} = C_m/g_{total}$). The neuron becomes "leakier" and forgets its past inputs more quickly. This transforms it from a sluggish integrator into a nimble coincidence detector, responding preferentially to inputs that arrive in near-synchrony. The neuron's entire computational style is altered by the network context, a phenomenon that only a conductance-based description can capture.
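One way to see this shift in computational style is to compare how strongly the peak response prefers synchronous over staggered inputs at the two conductance levels. This sketch (all parameters are assumed for illustration) drives a passive membrane with two brief current pulses:

```python
# Integrator vs coincidence detector: peak response to two 1 ms current
# pulses, delivered together or 10 ms apart, at two total conductances.
# V tracks the deviation from rest; all values are illustrative.
def peak_depol(g_total, delay, C_m=200e-12, I_pulse=0.3e-9,
               width=1e-3, dt=0.05e-3, T=60e-3):
    V, t, peak = 0.0, 0.0, 0.0
    while t < T:
        I = 0.0
        if 0.0 <= t < width:
            I += I_pulse                 # first pulse
        if delay <= t < delay + width:
            I += I_pulse                 # second pulse
        V += dt * (-g_total * V + I) / C_m
        peak = max(peak, V)
        t += dt
    return peak

g_rest, g_hc = 10e-9, 70e-9              # rest vs high-conductance state
# Ratio of synchronous to staggered peak: larger = sharper preference
# for coincident inputs.
ratio_rest = peak_depol(g_rest, 0.0) / peak_depol(g_rest, 10e-3)
ratio_hc = peak_depol(g_hc, 0.0) / peak_depol(g_hc, 10e-3)
```

The high-conductance neuron gains far more from synchrony: its short memory means a pulse arriving 10 ms late finds almost no trace of the first.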

This "divisive normalization" is not an abstract concept; it is a fundamental computation performed throughout the nervous system. Consider how you see the world. Your visual system is brilliant at detecting contrast, allowing you to see the edges of an object regardless of whether you are in bright sunlight or a dim room. A key mechanism for this is found in the retina. Retinal ganglion cells have a "center-surround" receptive field. Light in the center might excite the cell, while light in the surrounding area inhibits it. How does this inhibition work? A beautiful conductance-based model shows that the surround input often activates shunting inhibition, where the inhibitory reversal potential is very close to the neuron's resting potential. This surround input doesn't strongly hyperpolarize the cell, but it adds a large conductance, divisively scaling down the response to the center stimulus. In essence, the neuron's response to the center is being normalized by the activity in the surround. The cell reports not absolute light levels, but local contrast—the very essence of seeing edges. This elegant mechanism, emerging directly from the physics of conductances, is repeated across the brain to perform countless computations.

When the Music Goes Wrong: Models of Disease

The same tools that help us understand healthy computation can provide profound insights into neurological disorders. When the delicate balance of ionic conductances is disturbed, the symphony of the brain can turn into noise.

In Parkinson's disease, patients suffer from debilitating motor symptoms, which are associated with pathological, rhythmic oscillations in the beta frequency range (13–30 Hz) within a brain circuit called the basal ganglia. Where do these rhythms come from? Detailed conductance-based models of the neurons in this circuit, such as those in the Subthalamic Nucleus (STN) and Globus Pallidus (GPe), provide a compelling answer. These models incorporate a zoo of specific ion channels—like the T-type calcium current ($I_T$) that drives rebound bursts after inhibition, and the hyperpolarization-activated current ($I_H$) that acts like an electrical inductor, causing the membrane to "ring" at a preferred frequency. By simulating the intricate dance between these different currents within and between STN and GPe neurons, researchers can show how a network that is normally asynchronous can, upon the loss of dopamine, lock into a state of pathological synchrony, generating the very beta oscillations seen in patients. These models are not just descriptive; they are testbeds for investigating potential therapies that could disrupt these aberrant rhythms.

Similarly, conductance-based models are revolutionizing our understanding of chronic pain. After nerve injury, many people develop neuropathic pain, where even a light touch can be excruciating. This is due to hyperexcitability in the pain pathway. At the level of the primary sensory neuron, models show how the upregulation of certain channels, like persistent sodium channels ($I_{NaP}$), and the downregulation of others, like potassium channels ($I_K$), can lower the firing threshold and cause the neurons to fire repetitively, sending a barrage of pain signals. But the problem is compounded in the spinal cord. Here, we see the return of our friend, the depolarizing GABA synapse. Nerve injury can cause dorsal horn neurons to downregulate the KCC2 chloride transporter, leading to a buildup of intracellular chloride. As we saw earlier, this shifts $E_{Cl}$ positive, turning GABAergic transmission from inhibitory to excitatory. A system designed to gate and control pain signals is hijacked and now amplifies them. By integrating these molecular details into a conductance-based framework, we can build a coherent, quantitative picture of how an injury leads to a chronic disease state.

Building with Biology: Neuromorphic Engineering

The ultimate testament to understanding a principle is the ability to build with it. The elegance and efficiency of the brain's conductance-based computation has inspired a new field of engineering: neuromorphic computing. This field endeavors to build electronic circuits that mimic the brain's architecture and physical principles.

Remarkably, the language of conductance-based models translates directly into the language of analog electronic circuits. A current-based synapse can be built with a simple controlled current source. But to build a true conductance-based synapse, one needs a circuit that performs multiplication. This is elegantly achieved using a subthreshold CMOS circuit called a transconductor (often a differential pair). In this circuit, one input represents the time-varying synaptic conductance (as a controlling current) while two other inputs represent the membrane potential $V_m$ and the reversal potential $E_{rev}$. The circuit's output is a current that is naturally proportional to the product of the conductance and the driving force, $(E_{rev} - V_m)$. The mathematics of the biological membrane finds a direct, physical analog in silicon.
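A standard result for the idealized subthreshold differential pair is that its output current follows a hyperbolic tangent of its input voltage difference. The numerical sketch below illustrates how such a circuit approximates a conductance-based synapse for small driving forces; the parameter values are illustrative assumptions, and real silicon would deviate from this ideal curve:

```python
import math

# Idealized subthreshold differential pair acting as a conductance-based
# synapse: I_out = I_b * tanh(kappa * (E_rev - V_m) / (2 * U_T)), where
# the bias current I_b carries the time-varying synaptic "conductance".
U_T = 0.0258      # thermal voltage at room temperature, volts
kappa = 0.7       # subthreshold slope factor (process-dependent, assumed)

def synapse_current(I_b, E_rev, V_m):
    return I_b * math.tanh(kappa * (E_rev - V_m) / (2.0 * U_T))

# For small driving forces the tanh is nearly linear, so the circuit
# approximates I = g_syn * (E_rev - V_m) with g_syn = I_b * kappa / (2 U_T).
I_b = 1e-9                                        # 1 nA bias current
I_out = synapse_current(I_b, -0.064, -0.065)      # 1 mV driving force
g_eff = I_b * kappa / (2.0 * U_T)                 # effective conductance
```

Scaling the bias current scales the effective conductance, which is exactly the knob a time-varying $g_{syn}(t)$ needs; for large driving forces the output saturates at $I_b$, a departure from the ideal Ohmic synapse.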

This pursuit is not without its challenges. The very property that makes these subthreshold circuits so powerful—the exponential dependence of current on voltage—also makes them exquisitely sensitive to temperature and tiny manufacturing imperfections. As engineers build massive, wafer-scale, and even 3D-stacked neuromorphic systems, these variations become a critical problem. A chip might have a "fever," with one region hotter than another, causing the neurons there to run faster. The solution, inspired again by biology, is to build in local compensation and calibration mechanisms, allowing different parts of the chip to adapt to their local conditions.

From the intricate dance of ion channels in a single cell, to the computational motifs of the retina, the pathologies of the diseased brain, and the design of brain-inspired hardware, the conductance-based model is far more than a complex equation. It is a unifying framework, a powerful lens that reveals the deep physical principles underlying the brain's remarkable ability to compute, to learn, and to create. It teaches us that to truly understand the brain, we must appreciate that its neurons are not just adding up numbers, but are performing a rich and dynamic symphony of physics.