
Internal Resistance

Key Takeaways
  • Every real power source has internal resistance, which causes the usable terminal voltage to decrease as the current drawn from the source increases.
  • The Maximum Power Transfer Theorem states that a source delivers the most power to a load when the load's resistance equals the source's internal resistance, but at a limited efficiency of only 50%.
  • Achieving high energy efficiency requires a "mismatched" condition where the load resistance is significantly larger than the source's internal resistance, sacrificing maximum power output.
  • The concept of internal resistance is universal, appearing not only in electronic circuits but also in thermoelectric generators and even biological systems like microbial fuel cells.

Introduction

In the idealized world of physics textbooks, batteries and power supplies are perfect, delivering a constant voltage without fail. In reality, every energy source, from a tiny watch battery to a massive power grid, harbors an internal imperfection that fundamentally limits its performance: internal resistance. This unseen property is not merely a theoretical quirk; it is a critical factor that dictates the efficiency, power output, and safety of nearly every electrical device we use. This article bridges the gap between the concept of an ideal voltage source and the practical behavior of real-world devices, addressing why the voltage you measure is often less than what's promised.

We will explore this "ghost in the machine" across two main chapters. First, in "Principles and Mechanisms," we will uncover the fundamental concepts, from the basic voltage drop it causes to the critical trade-off between power and efficiency. Following that, in "Applications and Interdisciplinary Connections," we will see these principles in action, examining the far-reaching impact of internal resistance in fields ranging from audio engineering and high-speed electronics to materials science and biotechnology. By understanding this single concept, we can unlock a deeper appreciation for the engineering compromises at the heart of modern technology.

Principles and Mechanisms

Imagine you have a perfect water pump. It promises to deliver water at a certain pressure, no matter what. Now, imagine a real pump. Inside, it has narrow pipes, rusty bits, and tight corners. When water flows, these imperfections create drag, and the pressure you get at the tap is always a little less than what the pump itself is generating. Electrical sources are no different. An ideal battery or power supply is a physicist's fantasy; a real one has its own internal "stuff" that gets in the way of the flowing charge. We call this a source's internal resistance. It's the ghost in the machine, an unseen component that secretly dictates how the source behaves in the real world.

The Ghost in the Machine: Unmasking the Voltage Drop

So, how do we know this internal resistance is there if we can't see it? We can observe its effect. Let's say you have a brand-new battery. If you take a high-quality voltmeter and measure the voltage across its terminals with nothing else connected—what we call the open-circuit voltage ($V_{oc}$)—you are measuring the battery's full, unburdened potential. In this state, no current is flowing, so the internal resistance has nothing to resist. The voltage you see is the true electromotive force ($\mathcal{E}$) of the battery, the "pressure" the chemical reactions inside are capable of generating.

But the moment you connect a device, say a light bulb or a sensor, a current ($I$) begins to flow. This current has to travel through the battery's internal gunk—the electrolytes, the electrodes, all the physical materials. This journey isn't free. A portion of the voltage gets "spent" just overcoming this internal resistance ($r$). According to Ohm's law, this internal voltage drop is equal to $I \cdot r$. The voltage that's left over for your device, the terminal voltage ($V_L$), is therefore always less than the open-circuit voltage:

$$V_L = \mathcal{E} - I \cdot r$$

This is a fundamental truth for any real voltage source. An engineer testing a sensor might find its open-circuit voltage is 9.60 V, but when connected to a circuit, the usable voltage at its terminals drops to 8.25 V. That "missing" 1.35 V didn't vanish; it was lost to the battle against the sensor's own internal resistance.
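The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a measurement: the 5.0 Ω load resistance and the internal resistance are assumed values chosen so that the numbers reproduce the article's 9.60 V → 8.25 V example.

```python
# Sketch of the terminal-voltage drop across a real source.
# The load and internal resistances are illustrative assumptions.
EMF = 9.60            # open-circuit voltage (V), equal to the EMF
r = 6.75 / 8.25       # assumed internal resistance (~0.818 ohms)
R_load = 5.0          # assumed load resistance (ohms)

I = EMF / (r + R_load)    # current through the series loop (Ohm's law)
V_L = EMF - I * r         # usable voltage left at the terminals

print(f"I = {I:.2f} A, V_L = {V_L:.2f} V")
```

The "missing" 1.35 V is exactly the $I \cdot r$ drop inside the source.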

The Engineer's X-Ray Vision: Thévenin's Elegant Model

To work with this reality, engineers use a wonderfully simple but powerful model. Any complex, linear power source, no matter how intricate its inner workings, can be simplified and represented as two components: an ideal voltage source ($\mathcal{E}$) in series with a single resistor, our internal resistance ($r$). This is known as the Thévenin equivalent circuit.

This isn't just a convenient fiction; it's a profound mathematical truth that allows us to characterize any "black box" power source with just two measurements. First, we measure the open-circuit voltage ($V_1$) to find $\mathcal{E}$. Then, we connect a known load resistor ($R_L$) and measure the new, lower terminal voltage ($V_2$). With these values, we can unmask the hidden internal resistance using a little algebra:

$$r = \frac{(V_1 - V_2) R_L}{V_2}$$
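The two-measurement extraction is short enough to write down directly. A minimal sketch, reusing the article's 9.60 V and 8.25 V readings; the 5.0 Ω known load is an assumed value:

```python
def internal_resistance(v_open, v_loaded, r_load):
    """Thevenin two-measurement extraction: r = (V1 - V2) * R_L / V2."""
    return (v_open - v_loaded) * r_load / v_loaded

# Assumed measurements: open-circuit 9.60 V, then 8.25 V across a 5.0 ohm load.
r = internal_resistance(9.60, 8.25, 5.0)
print(f"r = {r:.3f} ohms")
```

With these numbers the hidden resistance comes out to roughly 0.82 Ω.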

This simple model is incredibly versatile. It has an alter-ego, the Norton equivalent circuit, which models the source as an ideal current source in parallel with the exact same resistance. The fact that this resistance value is the same in both models tells us it's a truly fundamental property of the source, not just an artifact of our chosen description.

The Inefficiency Tax: Wasted Energy and Heat

That "lost" voltage represents a loss of energy. And in physics, energy never truly disappears; it just changes form. The energy consumed by the internal resistance is converted directly into thermal energy, or heat. This is Joule heating. Every time a battery powers your phone or your car, it's not just delivering power to the device; it's also heating itself up.

The power dissipated as heat inside the source is given by $P_{int} = I^2 r$. This internal power loss is an unavoidable "tax" on energy transfer. For a lithium-ion battery powering a sensor, this internal heating might be a small, manageable amount, perhaps just a fraction of a watt. But this wasted power reduces the overall efficiency of the system. The efficiency ($\eta$) is the ratio of the useful power delivered to the load ($P_L$) to the total power generated by the source ($\mathcal{E} \cdot I$). It turns out this efficiency depends elegantly on the resistances:

$$\eta = \frac{P_L}{P_{total}} = \frac{I^2 R_L}{\mathcal{E} \cdot I} = \frac{R_L}{R_L + r}$$

Looking at this formula, you can see that to get very high efficiency (close to 1, or 100%), the load resistance $R_L$ must be much, much larger than the internal resistance $r$. This is the goal for low-power electronics where maximizing battery life is critical.
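The formula's behavior is easy to tabulate. A small sketch, assuming a normalized internal resistance of 1 Ω, showing how efficiency climbs as the load-to-source resistance ratio grows:

```python
def efficiency(r_load, r_int):
    """Fraction of generated power that reaches the load: R_L / (R_L + r)."""
    return r_load / (r_load + r_int)

r_int = 1.0  # normalized internal resistance (ohms)
for ratio in (1, 10, 100):
    print(f"R_L = {ratio:>3} * r  ->  efficiency = {efficiency(ratio * r_int, r_int):.3f}")
```

A matched load (ratio 1) gives 50%; a load 100 times larger than $r$ reaches about 99%.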

The Grand Compromise: Power vs. Efficiency

Here we arrive at one of the most beautiful and counter-intuitive trade-offs in electronics. What if your goal isn't to be efficient, but to get the absolute maximum power out of your source? What load resistance ($R_L$) should you choose to make your device glow the brightest or your motor spin the fastest?

One might naively think a very small load resistance (a near short-circuit) would draw the most power. But while the current would be huge, the terminal voltage ($V_L = I \cdot R_L$) would collapse to nearly zero, and power ($P_L = V_L \cdot I$) would be tiny. Conversely, a very large load resistance gives you nearly the full EMF as terminal voltage, but the current becomes minuscule, and again the power is tiny.

The peak of the mountain lies exactly in the middle. The Maximum Power Transfer Theorem states that the power delivered to the load is maximized when the load resistance is exactly equal to the internal resistance of the source: $R_L = r$.

But here is the kicker. What is the efficiency at this point of maximum power transfer? Using our efficiency formula:

$$\eta_{\text{max power}} = \frac{r}{r + r} = \frac{r}{2r} = \frac{1}{2}$$

At maximum power output, the efficiency is precisely 50%. Exactly half of the energy is delivered to your device, and the other half is dissipated as heat inside the power source itself. This is a fundamental compromise. You can have maximum power, or you can have high efficiency, but you can't have both at the same time.
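Both claims, the peak at $R_L = r$ and the 50% efficiency there, can be checked numerically. A brute-force sweep, with an assumed 12 V, 2 Ω source:

```python
# Sweep load resistances and find where delivered power peaks.
# Source values (12 V EMF, 2 ohm internal resistance) are assumed examples.
EMF, r = 12.0, 2.0

best_R, best_P = None, -1.0
for k in range(1, 2001):
    R_L = k * 0.01                 # sweep 0.01 .. 20.00 ohms
    I = EMF / (r + R_L)
    P = I * I * R_L                # power delivered to the load
    if P > best_P:
        best_P, best_R = P, R_L

eta_at_peak = best_R / (best_R + r)
print(f"peak at R_L = {best_R:.2f} ohms, efficiency there = {eta_at_peak:.2f}")
```

The sweep lands on $R_L = 2.00$ Ω, exactly the internal resistance, with efficiency 0.50.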

For starting a car, where you need a massive burst of power for a few seconds, a 50% efficiency is perfectly acceptable. But for a remote sensor that needs to run for years on a single battery, this would be a disastrous design. Such systems are deliberately "mismatched" with a load resistance much higher than the source resistance to achieve high efficiency, like 75% or more, at the cost of drawing less instantaneous power.

Beyond Simple Resistors: Dynamic and Unruly Behavior

Our world isn't just made of simple resistors. What happens when a source with internal resistance is connected to something that stores energy, like a capacitor? The situation becomes dynamic. As the capacitor charges, the voltage across it rises, opposing the source EMF. This causes the current to decrease exponentially over time. Consequently, the power wasted in the internal resistance and the power being delivered to the capacitor are both changing moment by moment. There's a fascinating "race" where, initially, most of the power might be wasted as heat, but as the capacitor charges, more of the power is successfully stored in its electric field. One could even calculate the exact instant when the power being wasted equals the power being stored, revealing the intricate dance of energy within the circuit.
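That "exact instant" can actually be computed. For a source of EMF $\mathcal{E}$ and internal resistance $r$ charging a capacitor $C$, the current is $i(t) = (\mathcal{E}/r)e^{-t/rC}$, and the wasted power $i^2 r$ equals the stored power $i \cdot v_C$ when $e^{-t/\tau} = 1/2$, i.e. at $t = \tau \ln 2$. A sketch with assumed component values:

```python
import math

# Assumed example values: 5 V source, 10 ohm internal resistance, 1 uF capacitor.
EMF, r, C = 5.0, 10.0, 1e-6
tau = r * C                          # time constant of the charging circuit

def current(t):
    return (EMF / r) * math.exp(-t / tau)

def v_cap(t):
    return EMF * (1.0 - math.exp(-t / tau))

# Crossover: heat in r equals power flowing into C when i*r = v_C,
# i.e. e^{-t/tau} = 1/2, so t = tau * ln 2.
t_cross = tau * math.log(2)
p_heat = current(t_cross) ** 2 * r          # power wasted as heat
p_stored = current(t_cross) * v_cap(t_cross)  # power entering the capacitor
print(f"t = {t_cross:.3e} s, wasted = {p_heat:.4f} W, stored = {p_stored:.4f} W")
```

Before $t = \tau \ln 2$ more power is wasted than stored; afterward, storage wins the "race."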

Furthermore, the idea of a constant internal resistance is itself a simplification. For many real-world devices, especially batteries, the internal resistance can change with temperature, age, state of charge, and even the amount of current being drawn. A more sophisticated model might treat the internal resistance as a variable that increases with current ($R_{\text{int}} = R_0 + k I_L$), reflecting the complex electrochemical processes inside. This adds another layer of complexity, but it brings our model one step closer to reality.
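With a current-dependent resistance, the loop equation $\mathcal{E} = I(R_0 + kI + R_L)$ becomes quadratic in $I$. A sketch of solving it, with assumed parameter values ($\mathcal{E}$, $R_0$, $k$, $R_L$ are all illustrative):

```python
import math

def load_current(emf, r0, k, r_load):
    """Solve emf = I * (r0 + k*I + r_load) for the physical root I >= 0.

    Rearranged: k*I^2 + (r0 + r_load)*I - emf = 0 (quadratic formula).
    """
    a, b, c = k, r0 + r_load, -emf
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Assumed values: 9 V source, R0 = 1 ohm, k = 0.05 ohm/A, 8 ohm load.
I = load_current(9.0, 1.0, 0.05, 8.0)
print(f"I = {I:.4f} A (vs. {9.0 / 9.0:.4f} A if r were constant at R0)")
```

The current comes out slightly below the constant-resistance prediction, since the resistance grows as current flows.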

When the Ghost Gets Angry: Thermal Runaway

The heat generated by internal resistance is usually a nuisance. But under fault conditions, it can become catastrophic. Consider a modern high-energy-density lithium-ion battery. It packs a huge amount of energy into a small space. Its internal resistance is engineered to be very low, typically a few milliohms (mΩ), to allow it to deliver large currents efficiently.

Now, imagine a microscopic defect causes an internal short circuit—a tiny new pathway with a resistance of just a few milliohms connecting the positive and negative electrodes directly. The total resistance in the circuit ($r_{\text{int}} + R_{\text{short}}$) is now minuscule. From Ohm's law, $I = \mathcal{E} / R_{\text{total}}$, a massive current instantly begins to flow inside the cell.

The power dissipated as heat in the internal resistance, $P_{\text{int}} = I^2 r$, skyrockets. The battery begins to heat up, not by a little, but by many degrees per second. This rapid temperature increase can trigger a vicious cycle: the heat causes chemical reactions that release more heat and flammable gases, which in turn causes the battery to heat up even faster. This unstoppable chain reaction is called thermal runaway, and it is the terrifying phenomenon behind many battery fires and explosions. It is a stark and powerful reminder that the humble, abstract concept of internal resistance is not just an academic curiosity; it is a critical factor governing the safety and performance of the technology that powers our modern world.
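The scale of the problem is worth putting in numbers. A back-of-the-envelope sketch, with assumed fault values (a 3.7 V cell, 5 mΩ internal resistance, and a hypothetical 5 mΩ internal short):

```python
# Assumed example: a nominal 3.7 V Li-ion cell with a 5 mOhm internal
# resistance develops a hypothetical 5 mOhm internal short circuit.
EMF = 3.7
r_int = 0.005
R_short = 0.005

I_fault = EMF / (r_int + R_short)   # I = EMF / R_total
P_heat = I_fault ** 2 * r_int       # Joule heating inside the cell itself
print(f"fault current = {I_fault:.0f} A, internal heating = {P_heat:.1f} W")
```

Hundreds of amps and hundreds of watts of heating, inside a cell the size of a finger: that is why thermal runaway escalates so quickly.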

Applications and Interdisciplinary Connections

Now that we have taken apart the clockwork of internal resistance, let's see what it does. We have, so far, treated it as something of a nuisance, a fly in the ointment of our ideal circuits. But nature is rarely so simple, and often, the "nuisances" are where the most interesting physics hides. Understanding this imperfection is not just about accounting for a small loss; it is the key to designing almost every real electrical system, from the stereo in your living room to the probes we send to the outer planets.

The idea that a real source of voltage must expend some of its own energy to push current out into the world is a profound one. This internal struggle is not merely a defect to be tolerated but a fundamental characteristic that governs the flow of power and information. In this chapter, we will embark on a journey to see how this single concept echoes through a surprising variety of fields, revealing a beautiful unity in the principles that govern everything from electronics and materials science to biology itself.

The Heart of Electronics: Power and Signals

Let's begin in the familiar world of electronics. Here, we constantly face a critical trade-off: are we trying to deliver raw power, or are we trying to transmit a delicate signal? The answer dramatically changes how we deal with internal resistance.

Perhaps the most classic application is the challenge of getting the most sound out of your stereo system. An audio amplifier has an internal resistance, and the speaker is the load. To make the speaker cone move with the maximum possible power and produce the loudest, richest sound, you must obey the maximum power transfer theorem. This theorem tells us that the greatest power is delivered when the load's resistance is matched to the source's internal resistance. But what if your high-end amplifier has an internal resistance of, say, 8 Ω, and your favorite speaker has a resistance of 2 Ω? A direct connection would be inefficient. This is where engineers get clever. They use a transformer, which is like an electrical gearbox. By choosing the correct turns ratio for the transformer, they can make the 2 Ω speaker appear to the amplifier as an 8 Ω load, achieving a perfect match and extracting every last bit of power. This "impedance matching" is a cornerstone of audio engineering, all dictated by the amplifier's internal resistance.
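The "electrical gearbox" has a simple rule behind it: an ideal transformer with turns ratio $n$ makes a load $R_L$ on the secondary look like $n^2 R_L$ from the primary. A sketch of computing the ratio for the 8 Ω amplifier and 2 Ω speaker from the text:

```python
import math

R_amp = 8.0   # amplifier internal resistance (ohms), from the text
R_spk = 2.0   # speaker resistance (ohms), from the text

# An ideal transformer reflects the secondary load as n^2 * R_L,
# so matching requires n = sqrt(R_amp / R_spk).
n = math.sqrt(R_amp / R_spk)
R_seen = n ** 2 * R_spk   # impedance the amplifier actually "sees"

print(f"turns ratio n = {n:.1f}, amplifier sees {R_seen:.1f} ohms")
```

A 2:1 turns ratio makes the 2 Ω speaker look like a perfectly matched 8 Ω load.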

This matching principle isn't just about a single resistor. Real-world loads are often complex networks. Even so, the rule holds: for maximum power, the equivalent resistance of the entire load network must be equal to the source's internal resistance.

But what happens when our goal shifts from raw power to pristine information? Consider sending a high-speed data pulse down a long coaxial cable, the kind that connects your router to the modem. At high frequencies, the cable itself doesn't look like an open wire; it behaves like a resistor with a value called its "characteristic impedance," $Z_0$. A pulse generator, our source, has its own internal resistance, $R_g$. To send the cleanest possible signal without it reflecting back from the end of the cable and causing errors, engineers match the source to the cable, setting $R_g = Z_0$. Now, look at what happens the moment the pulse is launched. The source's EMF, $\mathcal{E}$, is driving a circuit with two series resistors: its own internal resistance $R_g$ and the cable's impedance $Z_0$. Because we made them equal, they form a simple voltage divider. The voltage that actually enters the cable is not $\mathcal{E}$, but $\mathcal{E}/2$! This is a beautiful and at first surprising result. In the quest for signal integrity, we willingly and deliberately throw away half of the source voltage from the very start. It's a fundamental compromise at the heart of all high-speed digital and radio-frequency communication.

The insidious effects of internal resistance don't stop there. It degrades the performance of even the most basic electronic building blocks. A Zener diode regulator, for instance, is designed to provide a rock-steady output voltage. But if it's powered by a real-world supply with its own internal resistance, that resistance adds to the circuit, making the output voltage less stable and more susceptible to changes in the load. Similarly, the performance of a transistor amplifier can be subtly sabotaged. An aging battery's internal resistance can increase over time, which alters the carefully set bias conditions of the transistor, potentially shifting its operating point and distorting the signal it's meant to amplify. Furthermore, the internal resistance of the signal source itself can limit how fast an amplifier can operate. This source resistance, combined with the tiny intrinsic capacitances within the transistor, creates a low-pass filter that blocks high frequencies. The larger the source resistance, the lower the amplifier's bandwidth, a direct consequence of the RC time constant at the input.

Beyond the Circuit Board: Energy, Materials, and Life

The concept of internal resistance, it turns out, is far more universal than just a property of batteries and amplifiers. It appears whenever energy is converted or transmitted.

Let us venture into the realm of materials science and thermodynamics with a Thermoelectric Generator (TEG). These remarkable devices, which power deep-space probes like Voyager, generate electricity directly from a temperature difference—no moving parts required. The Seebeck effect creates a voltage, but the very material that generates this voltage also has electrical resistance. This is the TEG's internal resistance. To make things more interesting, this resistance is not constant; it changes with the device's temperature. Therefore, to extract the maximum power from a TEG—whether it's on a spacecraft or in a system for recovering waste heat from a factory flue—one must continuously match the electrical load to an internal resistance that is itself a moving target, dependent on the operating temperatures.

The story of a real device is a story of compounding imperfections. Even if we create a perfect thermoelectric material, we must connect it to external wires. These connections, at the junction of the thermoelectric material and the metal interconnects, are never perfect. They introduce a "contact resistance," another parasitic effect that adds to the total internal resistance of the generator. This unwanted resistance can severely cripple the device's performance. The fraction of the ideal maximum power that can be achieved is given by a startlingly simple and revealing formula: $\frac{R_{TE}}{R_{TE} + R_c}$, where $R_{TE}$ is the intrinsic resistance of the material and $R_c$ is the parasitic contact resistance. This shows us that "internal resistance" is really a catch-all term for everything inside the black box of our source that impedes the flow of current.

Now, for our final and most exotic example, let's consider the ultimate power source: life itself. In a Microbial Fuel Cell (MFC), living bacteria consume organic waste and generate electricity. This is not science fiction; it is a burgeoning field of biotechnology. Here, the idea of "internal resistance" becomes a magnificent, complex tapestry of interwoven phenomena. It's not a single component, but a sum of many struggles:

  • Ohmic Resistance: The resistance of the water or sludge that ions must physically travel through to get from the anode (where the bacteria live) to the cathode. Designs with long, tortuous paths for the ions, like an H-shaped cell, suffer from high ohmic resistance.
  • Activation Resistance: The electrochemical "energy barrier" that must be overcome. This includes the effort it takes for the bacteria to shuttle electrons to the anode surface and for oxygen molecules to be chemically reduced at the cathode.
  • Concentration Resistance: This is effectively a microscopic traffic jam. It represents the loss in voltage due to the slowness of delivering "food" (fuel molecules) to the bacteria and clearing away waste products. If the fuel can't get to the bacteria fast enough, the power output drops.

The design of an efficient MFC is a masterclass in engineering compromise. A compact, single-chamber design with an air-breathing cathode minimizes the ohmic resistance by placing the anode and cathode very close. However, this proximity creates a new problem: oxygen from the air can cross over and "steal" the electrons at the anode, lowering the cell's efficiency. The various geometries of MFCs are all attempts to find the sweet spot, minimizing the sum of all these different forms of internal resistance at once.

From a simple resistor inside a battery to the intricate bio-electrochemistry of a bacterial colony, the principle of internal resistance provides a unifying language. It describes the fundamental limitations on the transfer of energy and information in any real physical system. It is the price we pay for living in a universe governed by the laws of thermodynamics and transport, not in an idealized Platonic world of perfect sources and lossless wires. Understanding it, in all its varied forms, is what separates a student who can solve a textbook problem from an engineer who can build a working radio, a materials scientist who can design an efficient solar cell, or a biologist who can power a remote sensor with pond scum. It is, in essence, the physics of the possible.