Programmable Logic Array

Key Takeaways
  • A Programmable Logic Array (PLA) implements any logic function in Sum-of-Products form using a programmable AND-plane and a programmable OR-plane.
  • The PLA's primary advantage is its ability to share product terms across multiple outputs, leading to highly efficient logic implementations.
  • PLAs are ideal for "sparse" logic functions and can serve as the combinational logic core for complex sequential circuits like state machines.
  • Compared to PALs and ROMs, PLAs offer the most flexibility but often at the cost of speed and manufacturing complexity, highlighting a fundamental engineering trade-off.

Introduction

In the world of digital electronics, the quest for a single, versatile component that can be configured to perform a wide variety of logical tasks is a central theme. Rather than designing unique, custom-made chips for every calculator, controller, or simple computer, engineers sought a universal building block. This challenge gave rise to the family of programmable logic devices, and among them, the Programmable Logic Array (PLA) stands out as a particularly elegant and flexible solution. The PLA offers a direct hardware implementation of a fundamental principle: any logical function can be expressed as a Sum-of-Products. This article explores the architecture and application of this powerful device. The journey begins in the section "Principles and Mechanisms," where we will dissect the PLA's structure, understanding its programmable AND and OR planes and how they work together. We will also place the PLA in context by comparing it to its architectural relatives, the PAL and ROM, to understand the crucial trade-offs between flexibility, cost, and speed. Following this, the "Applications and Interdisciplinary Connections" section will showcase the PLA in action, demonstrating how it can be used to forge arithmetic circuits, implement complex state machines, and even overcome the physical limitations of digital hardware.

Principles and Mechanisms

Imagine you want to build a machine, a single, universal chip that you could teach to perform almost any logical task you could dream up. You wouldn't need a different chip for your calculator, another for your traffic light controller, and a third for a simple game. You'd have one reconfigurable block of silicon that could become any of them. This is the grand promise of programmable logic, and at its heart lies a beautifully simple and powerful structure: the Programmable Logic Array (PLA).

To understand this machine, we first need to grasp the language it speaks. The language of digital logic, as it turns out, has a universal grammar known as the Sum-of-Products (SOP) form. Any logical rule, no matter how complex, can be broken down into this two-step recipe.

The Universal Logic Recipe: Sum-of-Products

Think of building a logical function like preparing a meal. The first step is to create your basic components. In logic, these components are called product terms. A product term is simply a group of input signals (or their complements) all connected by the logical AND operation. For instance, if you have inputs A, B, and C, a product term might be something like A·B·C' (read as "A and B and not C"). This term is 'true' only for one specific combination of inputs. Another, simpler product term could be just A'·B.

The second step is to combine these components. This is done with the logical OR operation, creating a "sum" of the product terms. For example, a complete function F might be expressed as F = A'·B + A'·C. This expression is in Sum-of-Products form. It states that the output F is true if (NOT A AND B) is true, OR if (NOT A AND C) is true. This two-level structure—first ANDing, then ORing—is the foundational principle. The magic of a PLA is that it provides a physical architecture that directly mirrors this universal recipe.
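This two-step recipe is easy to sketch in code. Here is a minimal Python rendering of the example function F = A'·B + A'·C (function and variable names are ours, chosen for illustration):

```python
from itertools import product

def f(a, b, c):
    """Sum-of-Products: F = A'·B + A'·C."""
    term1 = (1 - a) & b   # product term A'·B
    term2 = (1 - a) & c   # product term A'·C
    return term1 | term2  # the "sum" (OR) of the product terms

# Enumerate the full truth table: F is true only when A = 0 and (B or C) holds.
truth_table = {bits: f(*bits) for bits in product((0, 1), repeat=3)}
```

Each AND of literals plays the role of one product-term line; the final OR plays the role of the output's OR gate.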

Anatomy of a Logic Factory: The AND and OR Planes

A PLA can be visualized as a tiny, two-stage factory floor laid out on a grid. It consists of two main sections: a programmable AND-plane and a programmable OR-plane.

Imagine the inputs to our factory, say A, B, C, and D, running vertically down the grid. For every input, we have two lines: the input itself (the "true" line, e.g., A) and its logical opposite (the "complement" line, e.g., A'). So, for n inputs, we have 2n vertical input lines.

Running horizontally across this grid are the "product term lines." Each of these lines is essentially a wire connected to an AND gate. The points where the vertical input lines and the horizontal product term lines cross are special; they are programmable fuses. By "blowing" or "keeping" a fuse at an intersection, we can connect a specific input (like B) or its complement (like B') to a particular product term line.

Let's say we want to create the product term P2 = A'·B·C'·D. We would take a product term line and program its connections to the vertical lines as follows: we keep the fuses connecting it to the A' line, the B line, the C' line, and the D line, and we blow all the other fuses on that row. The AND gate at the end of this line will now only output a '1' when that precise combination of inputs is present. This AND-plane is our "ingredient preparation" station. It doesn't create all possible product terms, only the ones we've programmed it to make.

The outputs of these product term lines then flow into the second section of our factory: the OR-plane. This plane is another grid. This time, the horizontal product term lines are the inputs, and new vertical lines represent the final outputs of the chip, say F1 and F2. Once again, the intersections contain programmable fuses. By keeping a fuse, we connect a specific product term to a final output's OR gate.

If we want our final function to be F1 = P1 + P2 + P3, we simply connect the product term lines for P1, P2, and P3 to the OR gate for F1. A complete blueprint for this factory floor can be laid out in a simple table, a PLA programming map, that explicitly shows which inputs form each product term, and which product terms form each output function.
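The whole two-plane factory can be modeled in a few lines of Python. The encoding below (1 = true-line fuse kept, 0 = complement-line fuse kept, None = the row ignores that input) is our own convention for the sketch, not a standard file format:

```python
def eval_pla(and_plane, or_plane, inputs):
    """Evaluate a PLA: and_plane holds one fuse row per product term;
    or_plane lists, per output, the indices of the terms its OR gate taps."""
    terms = []
    for row in and_plane:
        t = 1
        for bit, fuse in zip(inputs, row):
            if fuse == 1:
                t &= bit        # connected to the input's true line
            elif fuse == 0:
                t &= 1 - bit    # connected to the complement line
        terms.append(t)
    return [int(any(terms[i] for i in cols)) for cols in or_plane]

# Program P1 = A·B plus the text's example P2 = A'·B·C'·D, then F1 = P1 + P2.
and_plane = [
    (1, 1, None, None),   # P1 = A·B
    (0, 1, 0, 1),         # P2 = A'·B·C'·D
]
or_plane = [[0, 1]]       # F1 taps both product-term lines
```

Reading the two data structures side by side is exactly reading a PLA programming map.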

The "size" or capacity of a PLA is thus neatly described by three numbers: the number of inputs (n), the number of product terms it can create (p), and the number of outputs it can generate (m). The total programmability—the number of fuses—is given by the sum of fuses in both planes: 2n × p for the AND-plane and p × m for the OR-plane. So, the total number of programmable points is p(2n + m).
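As a quick sanity check on that formula, a one-line helper (the example dimensions below are made up for illustration):

```python
def pla_fuse_count(n, p, m):
    """Programmable points: 2n·p in the AND-plane plus p·m in the OR-plane."""
    return 2 * n * p + p * m   # equivalently p(2n + m)

# A hypothetical PLA with 16 inputs, 48 product terms, and 8 outputs:
total = pla_fuse_count(16, 48, 8)   # 48 · (32 + 8) = 1920 fuses
```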

The Art of Efficiency: Sharing Product Terms

Here we arrive at the true elegance of the PLA design. Why have two programmable planes? Why not just have a fixed set of components? The answer is efficiency, achieved through sharing.

Because the OR-plane is also programmable, any product term generated in the AND-plane can be "shared" by multiple output functions. Imagine we need to implement three functions:

  • F1 = P1 + P2
  • F2 = P1 + P3
  • F3 = P2 + P3

Instead of building each function from scratch, which would require generating six product terms in total (two for each function), a PLA allows us to be much smarter. We simply program the AND-plane to create the three unique product terms, P1, P2, and P3, just once. Then, in the OR-plane, we wire them up as needed: P1 and P2 go to the OR gate for F1, P1 and P3 go to the OR gate for F2, and so on. This is like different chefs in a large kitchen all using the same batches of prepped ingredients—a shared resource pool that dramatically reduces waste and effort.

This sharing is the PLA's superpower. When designing a system with multiple outputs that have overlapping logic, we can first identify the complete set of unique product terms needed across all functions. Then we can calculate the minimum resources required for the entire system. Sometimes, what look like two different functions might even simplify to the exact same sum-of-products expression, allowing them to be implemented with an identical set of shared terms.

A Tale of Two Architectures: PLA vs. PAL and ROM

The PLA's design, with its two programmable planes, is the most flexible of its kind. But this flexibility comes at a cost, and to appreciate it, we must compare it to its cousins in the programmable logic family.

First, consider the Read-Only Memory (ROM). From a logic perspective, a ROM can be seen as having a fixed AND-plane and a programmable OR-plane. Its AND-plane is an exhaustive decoder; for n inputs, it permanently generates all 2^n possible product terms (minterms). Your only job is to program the OR-plane to pick which of these minterms you want for your output function. This is like a kitchen that has every conceivable ingredient already prepared in small dishes. You just grab the ones your recipe calls for. For functions that are very "dense" (using many different minterms), a ROM is great. But for "sparse" functions, it's incredibly wasteful, as most of the generated minterms are never used.
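The ROM-as-logic view is literally a lookup table: program every address once, then let the inputs form the address. A sketch using a 3-input majority function (our example, not from the text):

```python
def program_rom(n_inputs, func):
    """'Burn' the ROM: store func's output bit for every one of the 2^n addresses."""
    return [func(addr) for addr in range(2 ** n_inputs)]

def majority(addr):
    """1 when at least two of the three input bits are set."""
    a, b, c = (addr >> 2) & 1, (addr >> 1) & 1, addr & 1
    return int(a + b + c >= 2)

rom = program_rom(3, majority)

def read_rom(rom, a, b, c):
    # The inputs simply form the address; the "logic" is the stored contents.
    return rom[(a << 2) | (b << 1) | c]
```

Note that all 2^3 = 8 words exist whether the function needs them or not — exactly the waste described above for sparse functions.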

Next, and more importantly, is the Programmable Array Logic (PAL). A PAL device is a compromise. Like a PLA, it has a programmable AND-plane, allowing you to create custom product terms. However, its OR-plane is fixed. This means that each output's OR gate is hardwired to a specific, limited group of product term lines. You can still customize the ingredients, but each chef is given a fixed recipe card that only allows certain combinations. You lose the PLA's powerful ability to freely share any product term with any output.

The Real-World Verdict: Why Simpler Can Be Better

Given the PLA's superior flexibility, one might expect it to have reigned supreme. Yet, historically, the simpler PAL architecture became far more commercially successful. Why would the market favor a less flexible device? The answer is a classic engineering trade-off between perfection and practicality: speed and cost.

The PLA's second programmable plane, the source of its flexibility, is also its Achilles' heel. Every one of those programmable fuses, and the wiring needed to access them, adds a tiny amount of electrical capacitance and resistance to the circuit. When you have a vast, fully interconnected grid, this parasitic capacitance adds up. Signals moving through this dense web slow down, just as it's slower to run through a thick forest than an open field. This made PLAs inherently slower than their PAL counterparts.

The PAL's fixed OR-plane, with its direct, permanent connections, was a much cleaner, faster electrical path. Furthermore, the simpler structure required less silicon area, making PALs cheaper to manufacture and leading to higher production yields. For the vast majority of real-world applications, the speed and cost benefits of the PAL architecture outweighed the theoretical flexibility of the PLA. The market had spoken: a "good enough" solution that is fast and cheap often beats a "perfect" solution that is slow and expensive.

This story of the PLA is more than just a lesson in digital design; it's a window into the very nature of engineering. It showcases the beauty of a universal concept—the Sum-of-Products—and the elegance of an architecture built to realize it. But it also reminds us that even the most elegant designs must contend with the messy realities of physics and economics, where trade-offs are king and practical performance often wins the day.

Applications and Interdisciplinary Connections

Having peered into the inner workings of the Programmable Logic Array (PLA), we might be tempted to see it as a neat but abstract curiosity—a clever grid of ANDs and ORs. But to do so would be like admiring the blueprint of a grand cathedral without ever stepping inside to witness its purpose. The true beauty of the PLA, as with any great tool in science and engineering, lies not in its structure alone, but in what it allows us to build. Its programmable nature makes it a universal canvas for digital thought, a bridge between the ethereal realm of Boolean algebra and the tangible world of functioning machines. Let us now embark on a journey to see where this remarkable device finds its home.

The Bedrock of Computation: Forging Arithmetic

At the very heart of any computer, from the simplest pocket calculator to the most powerful supercomputer, lies the ability to perform arithmetic. How does a machine "add" or "subtract"? It does so by manipulating bits according to the rules of logic. The PLA provides a wonderfully direct way to translate these rules into hardware.

Consider the simplest act of addition: adding two single bits, A and B. The result consists of a Sum bit, S, and a Carry bit, C. As we've seen, these are just Boolean functions: S = A ⊕ B and C = A·B. A PLA can be programmed to generate the necessary product terms—in this case, A'·B, A·B', and A·B—and then combine them to produce the two outputs simultaneously. This simple circuit is a "half-adder," and with a small PLA, we can construct one with elegant efficiency.
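The half-adder maps directly onto three product-term lines and two OR columns. A minimal sketch:

```python
def half_adder(a, b):
    # AND-plane: the three programmed product terms.
    t1 = (1 - a) & b   # A'·B
    t2 = a & (1 - b)   # A·B'
    t3 = a & b         # A·B
    # OR-plane: S taps t1 and t2; C taps only t3.
    s = t1 | t2        # S = A'B + AB' = A XOR B
    c = t3             # C = AB
    return s, c
```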

By linking these basic arithmetic blocks together, we can tackle more complex operations. A "full subtractor," for example, is a circuit that subtracts two bits while also accounting for a "borrow" from a previous stage—a crucial component for multi-bit subtraction. This, too, is merely a set of logic functions that can be mapped directly onto a PLA's programmable fabric. By chaining these fundamental arithmetic and logic units (ALUs), we build the very engine of computation. The PLA, in this sense, is not just a component; it is a way to sculpt the raw material of silicon into the fundamental building blocks of mathematical reasoning.

The Art of Efficiency: The Power of Sharing

Here we arrive at one of the most elegant features of the PLA architecture: its inherent efficiency in handling multiple, related tasks. Unlike simpler devices like a Programmable Array Logic (PAL), which has a fixed OR-plane, the PLA's fully programmable AND and OR planes allow for a remarkable optimization: the sharing of product terms.

Imagine you are a craftsman tasked with building two different, but similar, pieces of furniture. You notice that both require an identical, intricately carved leg. Would you carve this leg twice? Of course not. You would carve it once and use it where needed for both pieces. The PLA does precisely this.

Suppose we need to implement two distinct logic functions, F1 and F2. After simplifying them, we might find they have a product term in common. For instance, we might find that F1 = A'·B + A·C and F2 = A·B' + A·C. A PAL would need to generate four product terms in total—two for each function. A PLA, however, recognizes that the term A·C is common. It generates the three unique terms (A'·B, A·B', and A·C) in its AND-plane just once. Then, its programmable OR-plane simply taps the A·C line for both the F1 and F2 outputs. Sharing, once again, does the heavy lifting.
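In code, the saving is just set arithmetic over the product terms (the term names follow the example above):

```python
# Each function is the set of product terms its OR column taps.
F1_terms = {"A'B", "AC"}
F2_terms = {"AB'", "AC"}

and_plane_rows = F1_terms | F2_terms   # what the AND-plane must actually build
shared = F1_terms & F2_terms           # terms tapped by both outputs

# A PAL with per-output term groups builds 2 + 2 = 4 terms;
# the PLA builds only the 3 unique ones.
```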

This isn't just a minor improvement. For a system that needs to detect multiple conditions simultaneously—say, one function F that detects prime numbers and another function G that detects a different set of numbers—there might be an overlap in the logic. By identifying and sharing the common product terms, the PLA can implement the entire system with fewer resources than a PAL, saving silicon area and power. This principle of multi-output minimization is a cornerstone of modern digital design, and the PLA is its physical embodiment.

Finding the Right Tool: PLAs, ROMs, and the Nature of Information

To truly appreciate the PLA, we must see it in context. It is not the only universal logic device. Another is Read-Only Memory (ROM). A ROM can implement any logic function: you simply use the input variables as an address and store the desired output bit at that address. So when would we choose a PLA over a ROM?

The answer lies in the "sparsity" of the logic. Imagine a function of 6 inputs (64 possible combinations) that is 'true' for only 10 of those combinations, and we don't care what it is for the rest. To use a ROM, we would need a memory of 2^6 = 64 words, one for each possible input, even though most of them are unused or irrelevant. The ROM is like a dictionary containing every possible word, just in case we need it.

A PLA, on the other hand, is optimized for this kind of sparse problem. Before implementing the function, we use logic minimization techniques (like the Karnaugh maps we've explored) to find the simplest expression, taking full advantage of all the "don't care" conditions. It might turn out that our function, covering 10 specific cases, can be described with just three or four clever product terms. The PLA only needs to create hardware for those few terms. It is like a special-purpose recognizer that only looks for a few specific patterns, ignoring everything else. For such sparse functions, a PLA can be dramatically smaller and more efficient than a ROM. Of course, for complex functions, this minimization is not done by hand; it's the domain of powerful computer-aided design (CAD) tools like the Espresso heuristic, which automatically find a near-optimal set of shared product terms for large, multi-output systems.
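The size argument can be put in rough numbers by comparing AND-plane rows: the ROM's fixed decoder versus the PLA's programmed terms. The figure of 4 minimized terms below is illustrative, echoing the "three or four" mentioned above, not a computed result:

```python
def rom_minterm_lines(n_inputs):
    """A ROM's fixed decoder generates every minterm, used or not."""
    return 2 ** n_inputs

def pla_term_lines(n_minimized_terms):
    """A PLA builds only the terms left after minimization."""
    return n_minimized_terms

rom_rows = rom_minterm_lines(6)   # 64 product-term lines for 6 inputs
pla_rows = pla_term_lines(4)      # assuming minimization leaves 4 terms
```

For this sparse case the PLA needs a sixteenth of the ROM's AND-plane rows; the denser the function, the smaller that advantage becomes.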

Giving Logic a Memory: The Dawn of State Machines

So far, we have seen PLAs as calculators for combinational logic, where the output depends only on the present input. But the world is not so simple. Actions have consequences that influence future events. We need circuits with memory. By connecting the PLA's outputs back to its inputs through storage elements (like D-type flip-flops), we create a feedback loop. The circuit can now "remember" its current condition, or "state," and its next action can depend on both the new inputs and its past. We have just built a state machine.

In this arrangement, the PLA serves as the "brain" of the operation. It takes the current state and the current inputs, and based on its programmed logic, it calculates two things: the desired machine outputs for the current state, and the next state to transition to.

A classic example is a decade counter, a circuit that cycles through the numbers 0 to 9 and then repeats. The current number is the state. The PLA's job is to look at the current number (say, 3, which is 0011 in binary) and compute the next number (4, or 0100). By minimizing the logic for all four next-state bits across all ten states, the PLA implements the counting sequence with remarkable efficiency.
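A behavioural sketch of the PLA-plus-flip-flops loop for the decade counter. For clarity the next-state logic is written as a straight state table (conceptually one product term per valid state) rather than the minimized equations:

```python
def next_state_logic(q):
    """The PLA's job: current BCD state in, next state out (9 wraps to 0)."""
    assert 0 <= q <= 9, "only states 0-9 are valid in a decade counter"
    return (q + 1) % 10

def run_counter(cycles, start=0):
    """Clock the counter: the flip-flops latch each computed next state."""
    state, trace = start, []
    for _ in range(cycles):
        trace.append(state)
        state = next_state_logic(state)
    return trace
```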

This concept scales up to control incredibly complex systems. Consider an automated signaling system for a model train crossing. The controller must "know" if it's in an 'Idle' state (green light), a 'Warning' state (yellow light), or a 'Crossing' state (red light, gate down). Its behavior is defined by an Algorithmic State Machine (ASM) chart, a flowchart for hardware. This high-level description can be translated directly into a set of Boolean equations for the outputs and the next-state logic. A PLA is the perfect device to implement this logic, acting as the intelligent controller that reads sensors (train approaching, train on crossing) and manipulates lights and gates according to its programmed rules. This application is a beautiful bridge connecting the abstract fields of automata theory and digital logic with the practical world of control systems and mechatronics.
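The crossing controller reduces to two tables: a next-state table and an output table. The states and sensor names below follow the text; the exact transition conditions are an illustrative assumption, since the source does not spell them out:

```python
# Moore-style outputs: one row per state.
OUTPUTS = {
    "Idle":     ("green",  "gate up"),
    "Warning":  ("yellow", "gate up"),
    "Crossing": ("red",    "gate down"),
}

def next_state(state, approaching, on_crossing):
    """Next-state logic read straight off an ASM chart (assumed transitions)."""
    if state == "Idle":
        return "Warning" if approaching else "Idle"
    if state == "Warning":
        return "Crossing" if on_crossing else "Warning"
    if state == "Crossing":
        return "Crossing" if on_crossing else "Idle"
    raise ValueError(f"unknown state {state!r}")
```

Encoded in binary, both tables become exactly the kind of multi-output Boolean functions a PLA implements.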

The Subtle Physics of Logic: Taming the Hazards

Our journey ends with a look at a deeper, more subtle aspect of digital design. Our models of logic gates are idealizations. In the real world, signals take a finite amount of time to propagate through wires and transistors. This can lead to unexpected and unwelcome behavior.

Imagine an input to our circuit changes. For a fleeting moment, while the signals are racing to their new levels, the output might flicker to an incorrect value before settling. This momentary glitch is called a "hazard." In an asynchronous circuit, where there is no master clock to tell the system when to look at the outputs, such a hazard can be disastrous, potentially being misinterpreted as a valid signal and throwing the entire system into an incorrect state.

How do we fight this? Remarkably, we use logic to defeat the imperfections of physics. By carefully examining the transitions between states on a Karnaugh map, we can identify potential hazards. The solution is often to add an extra, seemingly redundant product term to the PLA's logic. This term's purpose isn't to change the static logic function, but to act as a bridge, keeping the output stable during a critical transition. The programmable AND-plane of a PLA is perfectly suited for adding these specific hazard-covering terms, ensuring that the circuit's real-world behavior matches its ideal, glitch-free design. Here, the PLA is not just implementing abstract math; it is taming the very physics of its own operation.
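The textbook case is the static-1 hazard in F = A·B + A'·C with B = C = 1: when A falls, the A·B term can drop before the delayed A'·C term rises. A toy unit-delay simulation (all modeling choices here are ours) shows the glitch, and shows the redundant cover term B·C removing it:

```python
def simulate_fall_of_a(add_cover, steps=8):
    """Unit-delay gate simulation of F = A·B + A'·C while A falls 1 -> 0."""
    B = C = 1
    def A(t):                      # input waveform: high, then falls at t = 2
        return 1 if t < 2 else 0
    # Signal histories, initialised to the steady state with A = 1.
    not_a = {-1: 0}
    p1, p2, pc = {-1: 1}, {-1: 0}, {-1: 1}
    f = []
    for t in range(steps):
        not_a[t] = 1 - A(t - 1)    # inverter: one unit of delay
        p1[t] = A(t - 1) & B       # AND gates: one unit of delay each
        p2[t] = not_a[t - 1] & C
        pc[t] = B & C              # the redundant cover term B·C
        ins = [p1[t - 1], p2[t - 1]] + ([pc[t - 1]] if add_cover else [])
        f.append(int(any(ins)))    # OR gate: one unit of delay
    return f

glitchy = simulate_fall_of_a(add_cover=False)   # a momentary 0 appears
covered = simulate_fall_of_a(add_cover=True)    # output holds steady at 1
```

The cover term depends on neither A nor its delayed complement, so it holds the output high throughout the transition — the same role the extra product term plays in a real PLA.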

From the simple dance of bits in an adder to the intricate choreography of a state machine and the subtle art of hazard prevention, the Programmable Logic Array reveals itself as a tool of profound versatility. It is a testament to the power of a simple, regular structure to create nearly boundless complexity, a beautiful manifestation of how abstract logical ideas are forged into the engines that power our digital world.