
Synaptic Conductance

Key Takeaways
  • Synaptic conductance determines ion flow not as a fixed current, but by pulling the membrane potential towards a specific reversal potential ($E_{\text{rev}}$).
  • A synapse's function (excitatory vs. inhibitory) is defined by whether its reversal potential is above or below the action potential threshold, leading to concepts like shunting inhibition.
  • The interaction of conductances leads to complex, non-linear computations like sublinear summation and coincidence detection via NMDA receptors.
  • The constant synaptic background activity in the brain creates a high-conductance state, which shortens the neuron's response time and makes it a faster, more precise processor.

Introduction

How does the brain think? This profound question begins not with abstract logic, but with the concrete physics of electricity. While we often simplify neurons into mere adders and subtractors of signals, this view misses the rich, dynamic computational power at their core. The true engine of neural processing is synaptic conductance—the variable ease with which ions flow across the synaptic membrane. This article addresses the gap between the simple "on/off" model of a synapse and the sophisticated biophysical reality. By understanding conductance, we can unlock the mechanisms behind everything from simple reflexes to complex learning.

This exploration is divided into two parts. In the first chapter, Principles and Mechanisms, we will dissect the fundamental electrical laws that govern synaptic action, revealing how concepts like reversal potential and shunting inhibition create a powerful computational toolkit. In the second chapter, Applications and Interdisciplinary Connections, we will see how these principles are applied across the nervous system to enable decision-making, learning, and stable, efficient brain function. We begin by examining the electrical soul of the synapse itself.

Principles and Mechanisms

The Electrical Soul of the Synapse

After our brief introduction, you might be left wondering what a synapse really is at its core. You've heard of chemical messengers, but how does a puff of chemicals translate into a thought, a feeling, or a muscle twitch? The answer, as is so often the case in nature, is beautifully simple in its principle, relying on the same laws of electricity that power your phone.

At its heart, a postsynaptic terminal—the "receiving" end of a synapse—acts like a tiny electrical circuit. When neurotransmitters bind to receptors, they don't do anything mystical. They simply open or close tiny gates, or ion channels. The opening of these channels creates a synaptic conductance, which we'll call $g_{\text{syn}}$. Conductance is just the inverse of resistance; it's a measure of how easily electrical charge can flow. A large conductance means a wide-open gate for ions.

The flow of ions—the synaptic current, $I_{\text{syn}}$—is then governed by a wonderfully elegant relationship, which is really just a version of Ohm's Law you might have learned in high school physics:

$$I_{\text{syn}} = g_{\text{syn}}(V - E_{\text{rev}})$$

Let's take a moment to appreciate this little equation. It tells us everything. The current is the product of two things: the conductance $g_{\text{syn}}$ (how open the gate is) and a term $(V - E_{\text{rev}})$ called the driving force. The driving force is the difference between the neuron's current membrane potential, $V$, and a special voltage called the reversal potential, $E_{\text{rev}}$.

What is this $E_{\text{rev}}$? You can think of it as the "happy place" for that particular type of synapse. It's the unique membrane voltage at which, even if the channel is wide open, there is no net flow of ions. The electrical and chemical gradients are perfectly balanced. Because of this, whenever a synaptic conductance turns on, it doesn't just inject a fixed current; it tries to pull the membrane potential $V$ towards its own reversal potential $E_{\text{rev}}$. If the membrane potential is below $E_{\text{rev}}$, opening the channel causes a current that pushes the voltage up. If the potential is above $E_{\text{rev}}$, the current flows in the opposite direction, pulling the voltage down. The reversal potential is the synapse's ultimate destination for the neuron's voltage. This simple fact is the key to understanding everything that follows.
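To make this concrete, here is a minimal numerical sketch (in Python, with illustrative parameter values, not measurements from any real neuron) of a single-compartment membrane with one open synaptic conductance. The voltage settles partway toward $E_{\text{rev}}$, but never past it:

```python
# Minimal sketch: an open synaptic conductance pulls the membrane
# potential toward its reversal potential. All values are illustrative.
C_m   = 1.0     # membrane capacitance (nF)
g_L   = 0.05    # leak conductance (uS)
E_L   = -65.0   # leak (resting) reversal potential (mV)
g_syn = 0.2     # synaptic conductance while the channel is open (uS)
E_rev = 0.0     # synaptic reversal potential (mV), e.g. an AMPA-like synapse

V, dt = E_L, 0.1                      # start at rest; time step in ms
for _ in range(2000):                 # 200 ms of simulated time
    I_syn  = g_syn * (V - E_rev)      # Ohm's-law synaptic current
    I_leak = g_L * (V - E_L)
    V += dt * (-(I_syn + I_leak) / C_m)

# V settles at the conductance-weighted average of E_L and E_rev,
# partway toward E_rev -- never past it.
print(f"steady-state V = {V:.1f} mV")  # ~ -13 mV with these numbers
```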

Excitatory or Inhibitory? Re-examining the Labels

You were probably taught that excitatory synapses cause the voltage to go up (depolarize) and inhibitory synapses cause it to go down (hyperpolarize). This is a useful first approximation, but it's not the whole story, and the truth is far more interesting.

The real functional definition of excitatory versus inhibitory is simple: does activating the synapse make the neuron more or less likely to fire an action potential? An action potential is triggered when the membrane potential crosses a certain action potential threshold, let's call it $V_{\text{th}}$.

So, the real question is about the relationship between a synapse's reversal potential, $E_{\text{rev}}$, and this threshold, $V_{\text{th}}$.

  • A synapse is truly excitatory if its reversal potential is above the action potential threshold ($E_{\text{rev}} > V_{\text{th}}$). When this synapse is active, it pulls the membrane potential towards a value that is guaranteed to be past the point of no return. A typical excitatory glutamate receptor, for example, has an $E_{\text{rev}}$ near 0 mV, while $V_{\text{th}}$ might be around −50 mV.

  • A synapse is inhibitory if its reversal potential is below the action potential threshold ($E_{\text{rev}} < V_{\text{th}}$). It pulls the voltage away from the threshold, or clamps it at a value below threshold, making it harder for the neuron to fire. A typical inhibitory GABA receptor might have an $E_{\text{rev}}$ near −70 mV.

This new perspective leads to a wonderful paradox. Imagine a synapse whose reversal potential is, say, −60 mV. The neuron's resting potential is −65 mV, and its threshold is −50 mV. What happens when this synapse opens? It will pull the voltage up from −65 mV towards −60 mV—a depolarization! By the old definition, this is excitatory. But functionally, it's inhibitory! By clamping the voltage at −60 mV, it makes it much harder for other excitatory inputs to push the potential all the way up to the −50 mV threshold. This effect is one of the most subtle and powerful tools in the neuron's computational toolkit: shunting inhibition.
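A quick numerical check makes the paradox concrete. This sketch (illustrative conductances only) uses the steady-state condition that all membrane currents cancel, which makes the voltage a conductance-weighted average of the open channels' reversal potentials:

```python
# Hedged sketch of the "depolarizing yet inhibitory" paradox.
# Steady-state voltage is the conductance-weighted average of the
# reversal potentials of every open channel. Numbers are illustrative.
def v_steady(conductances_and_reversals):
    g_total = sum(g for g, _ in conductances_and_reversals)
    return sum(g * E for g, E in conductances_and_reversals) / g_total

g_L, E_L = 0.05, -65.0   # leak
g_i, E_i = 0.50, -60.0   # "paradoxical" synapse: E_rev above rest
g_e, E_e = 0.05,   0.0   # a modest excitatory input
V_th = -50.0

alone      = v_steady([(g_L, E_L), (g_i, E_i)])               # depolarizes!
excitation = v_steady([(g_L, E_L), (g_e, E_e)])               # crosses threshold
combined   = v_steady([(g_L, E_L), (g_e, E_e), (g_i, E_i)])   # held below it

print(f"synapse alone:        {alone:.1f} mV (above rest)")
print(f"excitation alone:     {excitation:.1f} mV (past V_th = {V_th} mV)")
print(f"excitation + synapse: {combined:.1f} mV (below threshold)")
```

On its own, the "paradoxical" synapse depolarizes the cell; in combination, it vetoes an excitatory input that would otherwise have crossed threshold.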

The Art of Shunting: How to Inhibit by Adding

This brings us to one of the most counter-intuitive ideas in neuroscience: shunting inhibition. It's a way for a neuron to perform a kind of division, not just subtraction. Consider a synapse with a reversal potential ($E_{\text{rev}} = -70$ mV) very close to the neuron's resting potential ($E_L = -65$ mV). Under voltage clamp, where we hold the voltage at rest, activating this synapse creates only a tiny outward current because the driving force ($V - E_{\text{rev}}$) is very small. In an unclamped neuron at rest, it would cause a minuscule hyperpolarization. It seems pretty useless, doesn't it?

But its main effect is not changing the voltage directly. Its power lies in changing the conductance. When this inhibitory synapse opens, it dramatically increases the total conductance of the membrane. Think of the neuron as a leaky bucket you're trying to fill with water (excitatory current). Shunting inhibition doesn't remove water from the bucket; it punches a big hole in its side. Now, when you try to fill the bucket, most of the water just "shunts" out the side, and the water level (the voltage) barely rises.

Let's look at the numbers from a hypothetical scenario. An injected current that depolarizes a neuron by 5 mV on its own might only manage a ≈3.3 mV depolarization when a shunting synapse is active. The shunting synapse didn't produce a large negative voltage, but it effectively 'divided' the impact of the excitatory input. This is a crucial distinction from a simple current-based model of a neuron, where inputs would just add and subtract. The ability of a neuron to change its own properties—specifically its total conductance—is a central feature of its computational power.
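In circuit terms, the bucket metaphor is just Ohm's law with a larger total conductance in the denominator. A minimal sketch, with illustrative conductances picked to reproduce the 5 mV versus ≈3.3 mV figures above:

```python
# Minimal sketch of shunting-as-division, with illustrative numbers
# chosen to reproduce the 5 mV -> ~3.3 mV example in the text.
g_L     = 0.05   # resting (leak) conductance (uS)
g_shunt = 0.025  # shunting synapse with E_rev at rest: it adds
                 # conductance without injecting current at rest
I_inj   = 0.25   # injected current (nA)

dv_quiet = I_inj / g_L              # Ohm's law: 5.0 mV
dv_shunt = I_inj / (g_L + g_shunt)  # ~3.3 mV: same current, leakier bucket

print(f"without shunt: {dv_quiet:.1f} mV, with shunt: {dv_shunt:.1f} mV")
```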

A Nonlinear Symphony: Why 1 + 1 Is Often Less Than 2

If a single synapse can do such complex things, what happens when dozens or hundreds are active at once? If neurons were simple "adders," where inputs are just currents that sum up, then the response to two inputs would be the sum of their individual responses. This is called linear summation. But a real neuron, with its conductance-based synapses, is much more sophisticated. The summation of excitatory inputs is typically sublinear.

Imagine two identical excitatory synapses firing at the same time on a small patch of dendrite. The voltage change will be less than twice the voltage change produced by one synapse alone. Why? For two beautiful and interconnected reasons that stem directly from our fundamental equation, $I_{\text{syn}} = g_{\text{syn}}(V - E_{\text{rev}})$.

  1. Driving Force Reduction: The first synapse opens its channels and begins to depolarize the membrane, pushing $V$ towards its excitatory reversal potential, $E_{\text{e}}$. When the second synapse opens a fraction of a moment later, the membrane potential $V$ is already higher than it was at rest. This means the driving force, $(V - E_{\text{e}})$, for the second synapse is smaller. It's like trying to push a car that's already moving; your push adds less to its final speed. With a smaller driving force, the second synapse injects less current than the first one did.

  2. Shunting: Just as we saw with inhibition, the very act of opening excitatory channels increases the total conductance of the membrane. The channels from the first synapse create a "shunt" that diminishes the voltage change produced by the current from the second synapse. Each synapse, by opening, makes the neuron a leakier bucket, reducing the impact of all its neighbors.

In a scenario with 20 coincident excitatory synapses, the membrane doesn't depolarize 20 times as much as with one. Instead of soaring from −70 mV towards 0 mV, the potential might saturate at a value like −6.4 mV. This saturation is a fundamental feature of neuronal computation, preventing the system from becoming over-excited and allowing for a graded, controlled response to a wide range of input strengths.
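The same steady-state logic shows the saturation directly. This sketch (illustrative conductances, not the exact figures quoted above) computes the depolarization produced by $N$ identical excitatory synapses:

```python
# Sketch of sublinear summation: steady-state depolarization from N
# identical excitatory synapses. Parameters are illustrative.
g_L, E_L = 0.05, -70.0    # leak
g_e, E_e = 0.01,   0.0    # one excitatory synapse

def v_with_n_synapses(n):
    g_total = g_L + n * g_e
    return (g_L * E_L + n * g_e * E_e) / g_total

v1  = v_with_n_synapses(1) - E_L    # depolarization from one synapse
v20 = v_with_n_synapses(20) - E_L   # far less than 20 * v1

print(f"1 synapse:   {v1:.1f} mV above rest")
print(f"20 synapses: {v20:.1f} mV above rest "
      f"(linear sum would be {20 * v1:.1f} mV)")
```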

The Coincidence Detector: A "Smarter" Synaptic Gate

So far, our synaptic conductances, $g_{\text{syn}}$, have been simple: they turn on when a neurotransmitter arrives. But what if the gate itself were more intelligent? What if its ability to open depended not just on the neurotransmitter, but also on the state of the neuron itself?

Enter the N-methyl-D-aspartate (NMDA) receptor, a true marvel of molecular engineering. Like other glutamate receptors, it opens a channel when glutamate binds. But it has a trick up its sleeve: at resting membrane potentials, the channel is physically plugged by a magnesium ion (Mg²⁺). The key (glutamate) is in the lock, but a bolt (magnesium) is holding the door shut.

The bolt is only removed when the neuron is already depolarized, usually by the action of other nearby excitatory synapses. So, for the NMDA receptor to conduct a significant current, two things must happen at once: presynaptic glutamate release and postsynaptic depolarization. It is a true coincidence detector.

Let's imagine the numbers. At a resting potential of −65 mV, even with glutamate present, the Mg²⁺ block might be so effective that the channel operates at only 10% of its potential conductance. But if other inputs depolarize the cell to −30 mV, the Mg²⁺ plug is expelled, and the conductance might jump to 60% of its maximum. Even though the driving force is smaller at −30 mV than at −65 mV, the six-fold increase in conductance more than compensates, leading to a much larger influx of positive ions. The result is a highly nonlinear, even supra-linear, response where the whole is much greater than the sum of its parts. This mechanism is thought to be a cornerstone of learning and memory, allowing the neuron to strengthen connections that are active together.
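A toy calculation captures the trade-off. The 10% and 60% unblocked fractions below are the illustrative values from the text, not a fitted model of the Mg²⁺ block:

```python
# Hedged sketch of NMDA-style coincidence detection. The unblocked
# fractions are the illustrative 10% / 60% values from the text.
g_max = 1.0     # peak NMDA conductance (arbitrary units)
E_rev = 0.0     # NMDA reversal potential (mV)

def nmda_current(V, unblocked_fraction):
    g = g_max * unblocked_fraction
    return g * (V - E_rev)   # negative = inward (depolarizing) current

I_rest  = nmda_current(-65.0, 0.10)  # glutamate bound, Mg2+ plug in place
I_depol = nmda_current(-30.0, 0.60)  # plug expelled by depolarization

print(f"at -65 mV: I = {I_rest:.1f}")   # -6.5
print(f"at -30 mV: I = {I_depol:.1f}")  # -18.0: ~3x more inward current
# Smaller driving force, but the six-fold conductance increase wins.
```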

Life in the Crowd: The High-Conductance State

In a textbook, we often picture a neuron sitting at a quiet, stable resting potential, waiting for an input. The reality of a neuron in a living brain is anything but quiet. It's more like being in the middle of a bustling Grand Central Station, with a constant roar of activity. Neurons are perpetually bombarded by thousands of excitatory and inhibitory synaptic inputs.

This constant synaptic barrage forces the neuron into what is known as a high-conductance state. The total membrane conductance, which is the sum of the leak conductance and all the tiny, flickering synaptic conductances, is much higher than the leak conductance alone. This has profound consequences for how the neuron behaves.

First, the input resistance ($R_{\text{in}} = 1/g_{\text{total}}$) plummets. The neuron becomes much less sensitive to any single small input. An EPSP that would have caused a noticeable ripple in a quiet neuron is now just a drop in a turbulent ocean.

Second, the effective membrane time constant ($\tau_{\text{eff}} = C_m / g_{\text{total}}$) becomes much shorter. The time constant is a measure of how long the neuron "remembers" an input. In a quiet state, a long time constant allows the neuron to sum inputs over a long window. In a high-conductance state, the short time constant means the neuron has a very short memory. The voltage changes much more rapidly, and the neuron becomes a fast-responding device that integrates inputs over very brief time windows. This state transforms the neuron from a sluggish integrator into an agile coincidence detector, responding faithfully to the rapid ebb and flow of information in the network.
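Both consequences fall out of two one-line formulas. A sketch with illustrative values (a roughly five-fold conductance increase is in the range often quoted for intense background activity in vivo):

```python
# Sketch comparing a "quiet" neuron with one in a high-conductance state.
# Values are illustrative.
C_m  = 1.0    # membrane capacitance (nF)
g_L  = 0.05   # leak conductance alone (uS)
g_bg = 0.20   # extra conductance from background synaptic barrage (uS)

for label, g_total in [("quiet", g_L), ("high-conductance", g_L + g_bg)]:
    R_in = 1.0 / g_total   # input resistance (MOhm)
    tau  = C_m / g_total   # effective membrane time constant (ms)
    print(f"{label:>16}: R_in = {R_in:5.1f} MOhm, tau = {tau:4.1f} ms")
# quiet:            R_in = 20 MOhm, tau = 20 ms
# high-conductance: R_in =  4 MOhm, tau =  4 ms -> faster, less sensitive
```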

Location, Location, Location: Computation in the Dendritic Tree

Finally, we must abandon the fiction that a neuron is a single point, or compartment. Neurons possess magnificent, branching dendritic trees that can stretch for millimeters. And where a synapse makes its connection on this tree is of paramount importance to its function.

We can think of a dendrite as a leaky electrical cable. As a voltage signal—an EPSP—travels along this cable towards the soma, it is subject to two effects:

  1. Attenuation: The signal gets weaker with distance. This happens because current leaks out through the membrane along the way.
  2. Temporal Filtering: The signal gets "smeared out" in time. The sharp peak of the EPSP is broadened, and its rise time becomes slower. This is because the dendritic cable acts as a low-pass filter, preferentially cutting out the high-frequency (fast) components of the signal.

Both of these effects increase with electrotonic distance, $L = x/\lambda$, a functional measure that normalizes the physical distance ($x$) by the dendrite's length constant ($\lambda$). A synapse on a thin, leaky branch might be electrotonically "distal" even if it is physically close to the soma.
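For a uniform, infinitely long passive cable, steady-state attenuation takes a simple exponential form, $V(x) = V_0 e^{-x/\lambda}$. The sketch below uses this simplest case with illustrative numbers; real dendrites are branched and have sealed ends, so actual attenuation profiles differ:

```python
# Sketch of steady-state attenuation along an infinite passive cable:
# V(x) = V0 * exp(-x / lambda). Illustrative numbers only.
import math

lam = 500.0   # length constant lambda (um)
V0  = 10.0    # EPSP amplitude at the synapse (mV)

for x in (0, 250, 500, 1000):   # physical distance from the synapse (um)
    L = x / lam                 # electrotonic distance
    V = V0 * math.exp(-L)
    print(f"x = {x:4d} um (L = {L:.1f}): {V:.1f} mV at the soma")
# At one length constant (L = 1) the signal has fallen to ~37% of V0.
```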

This creates a fascinating computational trade-off. A proximal synapse, close to the soma, delivers a large, sharp, and rapid EPSP. It has a powerful, immediate impact. A distal synapse, far out on the dendritic tree, delivers an EPSP that is small and slow by the time it reaches the soma. It has been attenuated and temporally smeared.

At first glance, the distal synapse seems inferior. But its slowness is its secret weapon. Because its resulting somatic EPSP is so broad and long-lasting, it creates a prolonged window of opportunity for other inputs to summate with it. Distal inputs are therefore exceptionally good at temporal summation, providing a sustained "enabling" depolarization that allows later, sharper inputs to push the neuron over its firing threshold. In this way, the very morphology of the neuron becomes an integral part of its computational algorithm, allowing it to weigh and integrate information across both space and time in a complex dance of electricity and form.

Applications and Interdisciplinary Connections

We have spent some time exploring the fundamental physics of synaptic conductance—the opening and closing of tiny pores that allow ions to flow across a neuron's membrane. You might be tempted to think of this as just the low-level plumbing of the brain. But that would be like looking at the individual dots of a pointillist painting and missing the masterpiece. The true magic, the very substance of computation and behavior, arises from how this simple principle is orchestrated across space, time, and function. Let us now step back and admire the gallery of applications, to see how the humble synaptic conductance builds the architecture of the mind.

The Art of the Decision: A Cellular Tug-of-War

At its heart, a neuron is a decision-making device. Every moment, it weighs the cacophony of incoming signals and decides: to fire, or not to fire. This is not a democratic vote; it's a physical tug-of-war, and the rope is the membrane potential. Imagine a cockroach, peacefully resting. Suddenly, a puff of air—a predator's breath—brushes against the sensory hairs on its abdomen. Within milliseconds, the cockroach has turned and fled. This life-or-death decision is made not by a conscious brain, but by a handful of giant interneurons that must instantly integrate the "danger" signals.

This neural arbitration is a perfect illustration of competing conductances. Excitatory synapses, nudging the potential towards the firing threshold, open their channels. Simultaneously, inhibitory synapses pull in the opposite direction. The final membrane potential becomes a beautifully simple, weighted average of the reversal potentials for all the open channels, with the conductances themselves serving as the weights. The neuron sums the evidence, and if the final voltage crosses the threshold, an escape command is issued. This is not a metaphor; it is a physical calculation performed by the cell's membrane, a rapid summation of conductances that determines survival.
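In symbols, if each open channel type $i$ contributes a conductance $g_i$ with reversal potential $E_i$, setting the total membrane current $\sum_i g_i (V - E_i)$ to zero gives the steady-state potential as exactly this weighted average:

$$V_{\text{ss}} = \frac{\sum_i g_i E_i}{\sum_i g_i}$$

If the excitatory conductances dominate the sum, $V_{\text{ss}}$ is dragged toward their reversal potentials and the threshold is crossed; if the inhibitory ones dominate, it is pinned safely below.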

But here, nature throws us a wonderful curveball. You might think that a neurotransmitter like GABA is always inhibitory. It acts as the "brake" in the adult brain. Yet, in the developing brain of an infant mammal, the very same GABAergic synapse can be excitatory. How can this be? The answer lies not in the transmitter or the receptor, but in the internal environment of the young neuron. Due to a different regulation of ion pumps, the intracellular chloride concentration is high, shifting the reversal potential for chloride to a less negative value. Activating a GABA receptor in this context actually causes a depolarization! This reveals a profound principle: the "meaning" of a synaptic conductance is not absolute. It is exquisitely context-dependent, a function of the neuron's own internal state. The rules of the game can change.

The Computational Landscape: Location, Location, Location

So far, we have treated the neuron as a simple ball. But the majestic beauty of a typical neuron, like a cortical pyramidal cell, lies in its sprawling dendritic tree. This is not just wiring; it is a vast computational landscape. Where a synapse makes its connection is as important as what kind of connection it is.

Consider the axon initial segment (AIS), a tiny, specialized patch of membrane where the axon sprouts from the cell body. This is the point of no return. It is here that the final decision to generate an action potential is made. Now, imagine a specialized type of inhibitory neuron, the chandelier cell, which makes synapses exclusively onto this critical location. When these synapses become active, they don't necessarily hyperpolarize the cell. Instead, by opening chloride channels with a reversal potential near the resting potential, they dramatically increase the total conductance of the AIS. This is called shunting inhibition. It acts like opening a hole in a garden hose; it doesn't reverse the flow, but it drastically reduces the pressure (voltage) that can build up from other inputs. This massive increase in local conductance means a much larger excitatory current is required to reach the spike threshold. It is a powerful and precise veto power placed at the most strategic point of the neuron.
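A single-compartment caricature shows the arithmetic of the veto. With an illustrative chloride shunt whose reversal potential sits at rest, the current needed to reach threshold grows in proportion to the added conductance:

```python
# Sketch of the chandelier-cell "veto": extra current needed to reach
# threshold when a shunting conductance opens. Single-compartment
# approximation with illustrative numbers.
g_L,     E_L  = 0.05, -65.0   # resting conductance and potential (uS, mV)
g_shunt, E_cl = 0.30, -65.0   # chloride shunt with E_rev at rest
V_th = -50.0

def current_to_threshold(g_extra):
    # At threshold, the injected current must balance all conductances:
    # I = g_L*(V_th - E_L) + g_extra*(V_th - E_cl)
    return g_L * (V_th - E_L) + g_extra * (V_th - E_cl)

print(f"shunt closed: {current_to_threshold(0.0):.2f} nA to reach threshold")
print(f"shunt open:   {current_to_threshold(g_shunt):.2f} nA")  # 7x more
```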

While some inputs are positioned to provide ultimate control, others are arranged to produce something more than simple addition. When a cluster of excitatory synapses on a distant dendritic branch is activated together, it can do something extraordinary. If their combined local depolarization is strong enough, it can cross a local threshold and trigger a dendritic spike—a wave of electrical activity that is not a full-blown action potential but a powerful, regenerative signal that sweeps down the dendrite to the soma. This is a form of non-linear summation. Ten synapses firing together can produce a somatic response far greater than ten times the response to a single synapse. This implies that dendrites are not passive collectors of information. They are active processing units, capable of performing local computations on their inputs before a final verdict is reached at the soma. The neuron is not a single calculator; it's a distributed computing network.

The Dynamic Brain: Learning, Tuning, and Stability

Synaptic conductances are not etched in stone. They are dynamic, plastic, and constantly changing in response to experience. This is the physical basis of learning and memory. The most famous examples are Long-Term Potentiation (LTP) and Long-Term Depression (LTD).

In the most straightforward case, LTP strengthens a synapse by, for instance, increasing the number of AMPA receptors. This increases its peak conductance. When a group of synapses are active, the potentiated ones now have a "louder voice" in the neuronal conversation, making it more likely the neuron will fire in response to that specific pattern of input. This is the cellular alphabet of memory formation.

But plasticity can be far more subtle and computationally profound. Consider a Purkinje cell in the cerebellum, a region crucial for motor learning. These cells receive thousands of inputs. If a subset of these synapses undergoes LTD, their individual conductances are weakened. What is the computational result? A wonderfully nuanced change in the cell's "tuning." Before LTD, the cell might respond strongly to a large, synchronous volley of inputs. After LTD, the non-linear saturation effect at the dendrite is reduced. Paradoxically, this can make the cell's output more sensitive to the number of active inputs, even as each input is weaker. The neuron has not just turned down the volume; it has changed how it processes temporal patterns of information. It has retuned itself to listen to the conversation differently.

With all this potentiation and depression, you might wonder why the brain's activity doesn't either spiral into silence or explode into a storm of seizures. Neurons employ homeostatic plasticity, a set of mechanisms that globally adjust synaptic strengths to maintain a stable level of activity. For example, a neuron might scale up all its excitatory synaptic conductances in response to prolonged deprivation. But this has a fascinating side effect. As the total possible synaptic conductance increases, the dendritic membrane is pushed more easily into a saturating regime. The local voltage gets closer to the synaptic reversal potential, meaning that each additional bit of conductance produces less and less additional current. The neuron's input-output function becomes highly non-linear. Doubling the input strength no longer doubles the output. This elegant feedback mechanism keeps the neuron stable while fundamentally altering its computational properties.
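The saturation is easy to see numerically. In this sketch (illustrative values), each doubling of the total excitatory conductance buys a smaller and smaller additional depolarization as the voltage creeps toward the excitatory reversal potential:

```python
# Sketch of why scaled-up conductances push a dendrite into saturation.
# Illustrative numbers only.
g_L, E_L = 0.05, -65.0   # leak conductance and resting potential
E_e = 0.0                # excitatory reversal potential (mV)

def depolarization(g_e):
    v = (g_L * E_L + g_e * E_e) / (g_L + g_e)
    return v - E_L

for g_e in (0.05, 0.10, 0.20, 0.40):
    print(f"g_e = {g_e:.2f} uS -> {depolarization(g_e):5.1f} mV above rest")
# 32.5, 43.3, 52.0, 57.8 mV: doubling the input no longer doubles the output.
```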

From Cells to Systems: Emergent Order and a Brain in Action

The principles of synaptic conductance scale up, creating elegant solutions to system-level problems and defining the very nature of brain function.

One of the most beautiful examples is Henneman's size principle in motor control. When you lift a feather, your brain recruits a small number of tiny motor neurons. When you lift a heavy weight, it recruits those same small neurons plus a legion of larger ones. This orderly recruitment, from small to large, happens automatically, without a central controller micromanaging every neuron. Why? The answer is pure physics. Smaller motor neurons have less surface area, and thus a higher input resistance ($R_{\text{in}} \propto 1/S$). By Ohm's Law, $\Delta V = I_{\text{syn}} R_{\text{in}}$. For a given amount of synaptic current $I_{\text{syn}}$ from the descending command pathways, the smaller neuron experiences a larger voltage change. It will always reach the firing threshold first. As the command signal strengthens, progressively larger neurons are brought online. Crucially, small motor neurons innervate fatigue-resistant muscle fibers, while large ones innervate powerful but easily fatigued fibers. This simple physical principle ensures that for any task, the body automatically uses its most energy-efficient, fatigue-resistant resources first. It is a system of breathtaking efficiency, an emergent property born from the scaling of conductance with cell size.
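The recruitment order can be sketched in a few lines. The surface areas, threshold, and the $R_{\text{in}} = k/S$ scaling below are all illustrative stand-ins, but the logic is exactly the size principle's:

```python
# Sketch of Henneman's size principle. With R_in proportional to 1/S, the
# same shared synaptic current depolarizes small motor neurons the most,
# so they cross threshold first. All numbers are illustrative.
neurons = [("small", 1.0), ("medium", 2.5), ("large", 5.0)]  # relative area S
V_rest, V_th = -70.0, -55.0   # resting potential and threshold (mV)
k = 30.0                      # sets input resistance: R_in = k / S (MOhm)

for I_syn in (0.5, 1.5, 3.0):   # a strengthening descending command (nA)
    recruited = [name for name, S in neurons
                 if V_rest + I_syn * (k / S) >= V_th]
    print(f"I_syn = {I_syn:.1f} nA -> recruited: {recruited}")
# 0.5 nA -> ['small']; 1.5 -> ['small', 'medium']; 3.0 -> all three.
```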

Finally, what is a neuron's life really like in the awake, thinking brain? It is not a quiet existence, waiting for a signal. It is a constant, roiling storm of synaptic inputs, a condition known as the high-conductance state. This is not mere "noise." It is a fundamental mode of brain operation. This background barrage of excitatory and inhibitory conductances dramatically increases the total membrane conductance, which in turn radically decreases the membrane time constant ($\tau_m = C_m / g_{\text{total}}$). What is the consequence? The neuron becomes faster and more precise. With a shorter time constant, it integrates inputs over a much briefer window. It can follow rapid fluctuations in its input that a "quiet" neuron would simply smooth over. Spike timing, once thought to be a noisy affair, becomes a more precise and meaningful variable. We can study this directly using techniques like dynamic clamp, where we can synthetically inject these fluctuating conductances into a neuron and observe how its responsiveness is sharpened. The high-conductance state transforms the neuron from a sluggish integrator into a nimble, fast-responding processor, perfectly adapted for the real-time demands of cognition.

From the lightning-fast reflex of an insect to the graceful control of our own bodies, from the shaping of memory to the very texture of the brain's background activity, the story of the nervous system is written in the language of synaptic conductance. It is a language of sublime complexity and emergent simplicity, a testament to how the elegant laws of physics can give rise to the richness of behavior and thought.