
1's complement

SciencePedia
Key Takeaways
  • One's complement represents a negative number by inverting all the bits (flipping 0s to 1s and 1s to 0s) of its positive counterpart.
  • Arithmetic in one's complement, including subtraction via addition, requires an 'end-around carry' where the carry-out bit is added back to the result.
  • A significant characteristic and historical drawback of the one's complement system is its dual representation for zero (+0 and -0), which complicates hardware logic.
  • Though largely replaced by two's complement, one's complement was crucial in early computer design for simplifying subtraction circuitry using only inverters and adders.

Introduction

In the digital world of computers, which operates on the simple binary states of 0s and 1s, representing positive numbers is straightforward. However, the challenge of efficiently representing and calculating with negative numbers gave rise to several ingenious solutions. One of the earliest and most elegant of these is the one's complement system. This article delves into this fascinating method, addressing the fundamental problem of how to build a complete system of signed arithmetic from the ground up. You will explore the core principles behind one's complement, from its simple bit-flipping negation to its peculiar dual-zero issue and the clever 'end-around carry' technique that makes arithmetic possible. Following this, the discussion will shift to its real-world impact, examining the applications and interdisciplinary connections that reveal how this abstract system was embodied in early computer hardware and continues to offer valuable lessons in digital design and logic.

Principles and Mechanisms

Imagine you are in a room with a long row of light switches. Each switch can be either on or off. This is the world of binary, the fundamental language of computers. The most basic action you can take is to walk down the line and flip every single switch to its opposite state: every 'on' becomes 'off', and every 'off' becomes 'on'. This simple, universal inversion is the heart and soul of the one's complement system.

The Elegance of the Flip: From Inversion to Negation

Let's start with this fundamental operation, the **bitwise NOT**. In the world of digital logic, this is the simplest, most primitive transformation. Consider an 8-bit register in a microcontroller, perhaps used as a "mask" to control eight different data channels, where 1 means a channel is active and 0 means it's disabled. Suppose the mask is set to $01101001_2$. To temporarily disable all active channels and enable all inactive ones, the processor performs a bitwise NOT, flipping each bit individually. The 0s become 1s and the 1s become 0s, transforming $01101001_2$ into $10010110_2$.
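In most programming languages this flip is a one-liner; the only subtlety is masking the result to the register width, since Python integers are unbounded. A minimal sketch using the channel-mask value from the example above:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1          # 0b11111111 for an 8-bit register

channels = 0b01101001            # 1 = channel active, 0 = disabled
inverted = ~channels & MASK      # bitwise NOT, truncated to 8 bits

print(format(channels, '08b'))   # 01101001
print(format(inverted, '08b'))   # 10010110
```

Masking with `& MASK` is what keeps the result an 8-bit pattern; without it, Python's `~` would produce a negative integer with infinitely many leading 1s.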

There's a beautiful symmetry to this operation. If you flip all the switches, and then you walk down the line and flip them all again, you end up exactly where you started. This is a self-inverting, or involutive, property. Applying the NOT operation twice returns the original number: $\overline{\overline{N}} = N$. This principle is so reliable that some simple communication protocols have used it for data integrity checks: a sender flips the bits of a message before sending it, and the receiver flips them back to recover the original data.

This elegant bit-flipping is not just a party trick; it's the key to building a system for representing signed numbers. Early computer designers asked: how can we represent a negative number, like $-25$? The one's complement system offers a wonderfully direct answer: to find the representation of a negative number, start with its positive counterpart and simply flip all the bits.

Let's try it for $-25$ using 8 bits. The positive number $25$ in binary is $00011001_2$. To find its negative twin, $-25$, we apply the bitwise NOT:

$$\overline{00011001_2} = 11100110_2$$

And there you have it. In this system, $11100110_2$ is the machine's way of writing $-25$. Conversely, if a "digital archaeologist" were examining an old 16-bit machine and found the hexadecimal value $\mathrm{BEEF}_{16}$, they would first note its leading bit is 1 (since $\mathrm{B}_{16} = 1011_2$), indicating a negative number. To find its magnitude, they would flip all the bits of $1011\,1110\,1110\,1111_2$ to get $0100\,0001\,0001\,0000_2$, which is $4110_{16}$ or $16656$ in decimal. Thus, the original value was $-16656$.
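The archaeologist's procedure is easy to mechanize: if the leading bit is 1, invert the bits to recover the magnitude and negate. A small sketch of such a decoder (the function name is ours):

```python
def from_ones_complement(pattern: int, bits: int) -> int:
    """Interpret an N-bit pattern as a one's complement signed integer."""
    sign_bit = 1 << (bits - 1)
    mask = (1 << bits) - 1
    if pattern & sign_bit:            # leading 1: negative number
        return -((~pattern) & mask)   # magnitude is the bitwise inverse
    return pattern                    # leading 0: the value itself

print(from_ones_complement(0xBEEF, 16))     # -16656
print(from_ones_complement(0b11100110, 8))  # -25
```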

This method is just one of several ways to represent negative numbers. It's instructive to compare it to its cousins. For instance, in a **sign-magnitude** system, you'd simply take the positive value ($00011001_2$ for $25$) and flip only the far-left "sign" bit, giving $10011001_2$ for $-25$. The now-standard **two's complement** system goes one step further than one's complement: it flips all the bits and then adds one, yielding $11100111_2$ for $-25$. Each system is a different answer to the same fundamental question, with its own unique consequences.
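To make the comparison concrete, here is a sketch that encodes the same negative value in all three systems (the helper names are ours):

```python
def sign_magnitude(n, bits=8):
    # Set the leftmost bit, keep the magnitude bits unchanged
    return (1 << (bits - 1)) | abs(n) if n < 0 else n

def ones_complement(n, bits=8):
    # Flip every bit of the positive counterpart
    mask = (1 << bits) - 1
    return (~abs(n)) & mask if n < 0 else n

def twos_complement(n, bits=8):
    # Flip every bit, then add one; Python's & wraps modulo 2^bits
    return n & ((1 << bits) - 1)

for enc in (sign_magnitude, ones_complement, twos_complement):
    print(enc.__name__, format(enc(-25), '08b'))
# sign_magnitude 10011001
# ones_complement 11100110
# twos_complement 11100111
```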

A Tale of Two Zeros

The design choice of one's complement—defining negation as a simple bit flip—leads to a peculiar and fascinating consequence, a ghost in the machine. What happens if we apply our rule to the number zero?

Positive zero is, of course, represented by all zeros: $00000000_2$. If we take its one's complement to find "negative zero," we flip every bit and get $11111111_2$.

In the mathematical reality we all know and love, +0+0+0 and −0-0−0 are the same thing. But inside a machine using one's complement, we have two distinct bit patterns for the same value: an "all-zeros" zero and an "all-ones" zero. This is a profound complication. A computer must constantly check if a result is zero, and in this system, the hardware logic must be explicitly designed to recognize both patterns as zero. This ambiguity makes comparisons and other logical operations more complex and slower, a key reason why the two's complement system, which cleverly avoids this problem, ultimately won the day.

This feature of "two zeros" directly shapes the range of numbers the system can represent. For an $N$-bit system, there are $2^N$ possible patterns. One pattern is used for positive zero ($00\ldots0_2$). Half of the remaining patterns are used for positive numbers, and the other half for negative numbers, including the second zero ($11\ldots1_2$). This results in a perfectly symmetric range. For a 12-bit system, the largest positive number is $0111\,1111\,1111_2$, which is $2^{11}-1$, or $2047$. The most negative number is its bitwise inverse, $1000\,0000\,0000_2$, which represents $-(2^{11}-1)$, or $-2047$. The range is perfectly balanced, from $-2047$ to $+2047$, but at the cost of that strange dual zero.
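A quick enumeration confirms both the symmetric range and the dual zero. This sketch decodes every pattern of a small width using the flip-and-negate rule described above (the helper name is ours):

```python
def decode(pattern, bits):
    """Interpret an N-bit pattern as a one's complement value."""
    mask = (1 << bits) - 1
    if pattern >> (bits - 1):          # sign bit set: negative
        return -((~pattern) & mask)
    return pattern

bits = 4
values = [decode(p, bits) for p in range(1 << bits)]
print(min(values), max(values))  # -7 7: the symmetric range
print(values.count(0))           # 2: both 0000 and 1111 decode to zero
```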

The Magic Circle: Arithmetic with End-Around Carry

So, we have a system for writing numbers that is elegant but has a quirky dual zero. Can we at least do arithmetic with it? Let's try adding two negative numbers, say, $-2$ and $-3$, using a 4-bit system. (The 4-bit range is $-(2^3-1)$ to $2^3-1$, or $[-7, 7]$, so both operands and the expected sum of $-5$ fit comfortably.)

  • First, we represent them in 4-bit one's complement:

    • $+2$ is $0010_2$, so $-2$ is $1101_2$.
    • $+3$ is $0011_2$, so $-3$ is $1100_2$.
  • Now, let's add them using standard binary addition:

$$
\begin{array}{ccccccl}
  &   & 1 & 1 & 0 & 1 & \quad(-2) \\
+ &   & 1 & 1 & 0 & 0 & \quad(-3) \\
\hline
  & 1 & 1 & 0 & 0 & 1 &
\end{array}
$$

The result is a 5-bit number, $11001_2$. Our 4-bit machine can only hold the four rightmost bits, $1001_2$. The leftmost bit, the 1 that "fell off" the end, is called the **carry-out bit**. If we just look at the 4-bit result, $1001_2$, its one's complement is $0110_2$ (or 6), so $1001_2$ represents $-6$. But we know that $-2 + (-3)$ should be $-5$. The answer is wrong!

At first glance, this seems like a failure. But here lies the genius of the system. That leftover carry bit is not an error to be discarded; it's a message from the far end of the number, a signal telling us what to do next. The rule is this: whenever an addition results in a carry-out bit, you must take that 1 and add it back to the least significant bit of your result. This is called an **end-around carry**.

Let's apply it to our sum:

$$
\begin{array}{cccccl}
  & 1 & 0 & 0 & 1 & \quad(\text{Initial sum}) \\
+ & 0 & 0 & 0 & 1 & \quad(\text{End-around carry}) \\
\hline
  & 1 & 0 & 1 & 0 &
\end{array}
$$

The final result is $1010_2$. To see what this represents, we flip the bits to get $0101_2$, which is $5$. So, $1010_2$ is indeed $-5$. The magic works!
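The whole procedure can be captured in a few lines. A minimal sketch of a one's complement adder with the end-around carry (the function name is ours):

```python
def ones_complement_add(a, b, bits=4):
    """Add two N-bit one's complement patterns with end-around carry."""
    mask = (1 << bits) - 1
    total = (a & mask) + (b & mask)
    if total > mask:                  # a carry-out was produced
        total = (total & mask) + 1    # wrap it around to the LSB
    return total & mask

# -2 is 1101 and -3 is 1100 in 4-bit one's complement
result = ones_complement_add(0b1101, 0b1100)
print(format(result, '04b'))  # 1010, the pattern for -5
```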

This "end-around carry" isn't just a clever hack; it's the physical manifestation of a deep mathematical truth. A standard $N$-bit adder performs arithmetic modulo $2^N$. However, the one's complement system, with its dual zero, naturally operates in a world of arithmetic modulo $2^N-1$. These two worlds are almost the same, differing by exactly one. The congruence $2^N \equiv 1 \pmod{2^N-1}$ is the mathematical bridge. The carry-out bit represents a value of $2^N$, and by adding it back in as a 1, we are effectively forcing the modulo-$2^N$ hardware to compute the correct modulo-$(2^N-1)$ result. In terms of circuitry, this is beautifully simple: you just connect the adder's $C_{out}$ (carry-out) wire back to its $C_{in}$ (carry-in) terminal, creating a feedback loop where the number line bites its own tail.

This unified system makes other operations, like subtraction, fall into place naturally. To compute $A - B$, the machine simply calculates $A + (-B)$. It finds the one's complement of $B$ (bitwise NOT) and performs addition using the end-around carry rule. For example, to calculate $60 - 20$ in an 8-bit system, we take $A = 00111100_2$ (60) and $B = 00010100_2$ (20). We compute $A + \overline{B}$:

$$00111100_2 + \overline{00010100_2} = 00111100_2 + 11101011_2 = 1\,00100111_2$$

We have a carry-out of 1, so we add it back: $00100111_2 + 1 = 00101000_2$. This result is $32 + 8 = 40$, which is exactly right. The entire system of arithmetic (negation, addition, and subtraction) is built from two simple, elegant ideas: the bit-flip and the end-around carry.
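Put together, subtraction is just inversion followed by an end-around-carry addition. A self-contained sketch (the function names are ours):

```python
def oc_add(a, b, bits=8):
    """One's complement addition with end-around carry."""
    mask = (1 << bits) - 1
    total = (a & mask) + (b & mask)
    if total > mask:                      # carry-out: wrap it around
        total = (total & mask) + 1
    return total & mask

def oc_sub(a, b, bits=8):
    """A - B computed as A + NOT(B)."""
    mask = (1 << bits) - 1
    return oc_add(a, (~b) & mask, bits)

print(format(oc_sub(0b00111100, 0b00010100), '08b'))  # 00101000, i.e. 40
```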

Applications and Interdisciplinary Connections

It is one thing to play with a mathematical idea on paper, to understand its rules and properties in the abstract. It is quite another, and a far more magical thing, to see that idea take physical form, to become part of the intricate clockwork of a machine that computes and thinks. The one's complement system, which we have explored, is a marvelous case study in this transformation from pure logic to practical reality. Though largely superseded by the more streamlined two's complement system in modern computers, its story is not just a historical footnote. It is a rich and beautiful lesson in the art of engineering, revealing how cleverness, elegance, and even peculiar ghosts can arise when we build machines to do our arithmetic.

Let us embark on a journey to see where this fascinating number system comes to life, from the simplest sliver of silicon to the grand architecture of algorithms and abstract logic.

The Physical Embodiment: From an Idea to a Gate

How does a machine actually "find" the one's complement of a number? The answer is beautifully, almost poetically, simple. To find the one's complement, we "flip" every bit—every 0 becomes a 1, and every 1 becomes a 0. In the world of digital electronics, this operation is the fundamental job of a logic gate known as an inverter, or a NOT gate.

Imagine a 4-bit number flowing along four parallel wires. To compute its one's complement, all we need to do is place a NOT gate on each wire. The set of signals coming out is, by definition, the one's complement of the set of signals that went in. There is a perfect, one-to-one correspondence between the mathematical operation of negation and the physical action of the circuit. This is the first and most fundamental application: the idea is made manifest in silicon.

The Art of Arithmetic Without Subtraction

The true genius of complement systems is that they provide a clever way to perform subtraction using the very same hardware that performs addition. This was a monumental breakthrough in early computer design, as it drastically simplified the required circuitry. Instead of building a separate, complex "subtractor" unit, engineers could get away with just an "adder" and an "inverter."

The rule is simple: to compute $M - S$, the machine instead computes $M + (\text{one's complement of } S)$. Let's say an early microprocessor needs to calculate $1001_2 - 1100_2$. It first takes the one's complement of $1100_2$, which is $0011_2$. Then, it simply adds this to $1001_2$, yielding the result $1100_2$. The subtraction has vanished, replaced by an inversion and an addition.

But there is a subtle and beautiful piece of magic required to make this work in all cases. What happens when the addition produces a carry-out bit from the most significant position? In a normal addition, this might signal an overflow. But in one's complement arithmetic, this stray bit is the key to the whole affair. The system employs a trick called an **end-around carry**. The carry-out bit is not discarded; it is "wrapped around" and added back to the least significant bit of the result.

Consider calculating $+50 - 35$. In 8-bit one's complement, this becomes an addition of the patterns for $+50$ ($00110010_2$) and $-35$ ($11011100_2$). The initial binary addition yields a sum of $00001110_2$ with a carry-out of 1. That '1' is the hero. It is added back to the result, turning $00001110_2$ into $00001111_2$, which is the correct representation for $+15$. The same elegant rule works when adding two negative numbers, like $-9$ and $-19$. The initial sum again produces a carry-out, which, when added back, produces the correct negative result. This end-around carry is like a tiny correction factor, a whisper from the machine's arithmetic soul, that ensures the logic holds together perfectly.

Beyond Simple Addition: Complexities and Quirks

While the end-around carry is a beautiful solution for addition and subtraction, the story becomes more complex when we venture into other operations. Here, the unique personality of one's complement—its quirks and subtleties—truly begins to shine, offering deep lessons for the aspiring digital architect.

**A Word of Caution on Shortcuts**

In digital design, engineers love efficient tricks. One of the most common is using bit-shifting to perform multiplication or division by powers of two. A single logical left shift (moving all bits one position to the left and filling the vacated rightmost bit with a 0) is equivalent to multiplying by two for unsigned numbers. But can we use this trick in a one's complement system?

Let's try to multiply $-1$ by four using two logical left shifts. In 4-bit one's complement, $-1$ is $1110_2$. Shifting it left twice yields $1000_2$, which represents $-7$. But the correct answer is, of course, $-4$. The shortcut has failed spectacularly. Why? The logical left shift does not respect the sign bit. It blindly shifts bits out, potentially changing a negative number into a positive one or mangling its value. This reveals a crucial principle: algorithms and hardware tricks are not universally applicable; they must be tailored to the specific rules of the number system they operate on.
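The failure is easy to reproduce. A sketch in 4 bits, with a small decode helper (names are ours) to interpret the patterns:

```python
def decode(p, bits=4):
    """Interpret an N-bit pattern as a one's complement value."""
    mask = (1 << bits) - 1
    return -((~p) & mask) if p >> (bits - 1) else p

x = 0b1110                        # -1 in 4-bit one's complement
shifted = (x << 2) & 0b1111       # two logical left shifts, truncated to 4 bits

print(format(shifted, '04b'), decode(shifted))  # 1000 -7, not the expected -4
```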

**The Ghost in the Machine: Negative Zero**

Perhaps the most famous feature of one's complement is its dual representation of zero. There is "positive zero" ($0000\ldots0_2$) and "negative zero" ($1111\ldots1_2$). While they have the same mathematical value, they are distinct bit patterns. This duality can lead to strange and wonderful results.

Consider the integer $-1$, represented in 8 bits as $11111110_2$. What happens if we perform an arithmetic right shift, which shifts all bits to the right but preserves the sign bit? The rightmost 0 is discarded, all other bits shift right, and the leftmost bit (the sign bit, a 1) is copied into the newly vacant position. The result is $11111111_2$: negative zero. An operation that should approximate division by two has instead turned $-1$ into $0$. This quirk is a major reason why designers eventually favored two's complement, which has only a single, unambiguous representation for zero.
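The ghost is easy to summon. A sketch of an arithmetic right shift on a raw N-bit pattern (the sign extension is done by hand, since Python's `>>` on a masked pattern would shift in a 0):

```python
def asr(p, bits=8):
    """Arithmetic shift right: the sign bit is copied into the vacated position."""
    sign = p & (1 << (bits - 1))
    return (p >> 1) | sign

minus_one = 0b11111110                 # -1 in 8-bit one's complement
print(format(asr(minus_one), '08b'))   # 11111111: negative zero
```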

**Building Intelligent Algorithms**

Despite its eccentricities, one's complement was the foundation for the arithmetic logic units (ALUs) of many early computers. Engineers learned to work with its properties to build complex algorithms. For instance, multiplication wasn't done in a single step. It was often implemented sequentially, through a series of additions and shifts. To compute $(-5) \times (+3)$, the machine would load the representations for $-5$ and $+3$ and then, guided by the bits of the multiplier, would repeatedly add the multiplicand to a running total, shifting the result at each step. The fundamental operation of one's complement addition, with its end-around carry, served as the atomic building block for this more complex process.
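A much-simplified sketch of the idea: a real ALU would shift-and-add per multiplier bit, but reducing multiplication to repeated one's complement additions shows the same atomic building block at work (function name and simplification are ours):

```python
def oc_add(a, b, bits=8):
    """One's complement addition with end-around carry."""
    mask = (1 << bits) - 1
    total = (a & mask) + (b & mask)
    return (total & mask) + 1 if total > mask else total

minus_five = 0b11111010        # one's complement pattern for -5
product = 0                    # running total, starting at positive zero
for _ in range(3):             # multiply by +3 via repeated addition
    product = oc_add(product, minus_five)

print(format(product, '08b'))  # 11110000, the pattern for -15
```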

Engineers even adapted highly efficient algorithms like Booth's algorithm for one's complement machines. Booth's algorithm speeds up multiplication by processing bits in groups. However, it was originally designed with two's complement in mind. A direct application to a one's complement number would yield the wrong answer. The solution? A beautiful piece of insight. Engineers realized that the standard Booth's algorithm on a negative one's complement number $Y$ actually computes the product with $Y-1$. Therefore, to get the correct answer, a final correction step is needed: simply add the multiplicand $M$ back to the result. This is a prime example of engineering ingenuity: when a powerful tool doesn't quite fit, you don't discard it; you build an adapter.

Interdisciplinary Vistas: From Hardware to Abstract Systems

The influence of one's complement extends beyond pure arithmetic, connecting to the broader fields of computer science and logic.

**The Machine as a Watcher: Automata Theory**

Any number in a computer is, at its heart, just a pattern of bits. The one's complement representation of $-3$ in 4 bits is $1100_2$. This is not just a number; it's a sequence. We can design a digital circuit known as a Finite State Machine (FSM) to "watch" a stream of incoming bits and raise a flag the moment it sees the pattern $1100_2$. Such sequence detectors are fundamental components in networking, data processing, and control systems. This application shows that the specific bit patterns generated by a number system have implications for pattern recognition and the design of sequential logic, connecting the world of computer arithmetic to the theoretical domain of automata.

**Encoding New Realities: A Glimpse of Ternary Logic**

Perhaps the most mind-expanding application is conceptual. Can we use a binary system to represent something other than binary numbers? Consider a hypothetical processor designed to work with a three-valued (ternary) logic system with states TRUE, FALSE, and UNKNOWN. An architect could cleverly assign 4-bit patterns to these states—for instance, FALSE = 0000, UNKNOWN = 0011, and TRUE = 1111.

What's brilliant about this specific (though hypothetical) choice is that the complex rules of ternary AND logic can now be implemented with a single, standard bitwise AND operation. For example, TRUE AND UNKNOWN should be UNKNOWN. In our representation, this is 1111 AND 0011, which indeed yields 0011! This demonstrates a profound principle: the bit patterns we use are a form of encoding, and with a clever encoding scheme, we can make simple binary hardware perform the logic of a completely different, more complex system.
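The claim is easy to verify exhaustively. A sketch using the hypothetical encoding from above:

```python
FALSE, UNKNOWN, TRUE = 0b0000, 0b0011, 0b1111

def ternary_and(a, b):
    # One bitwise AND implements the entire ternary truth table
    return a & b

names = {FALSE: 'FALSE', UNKNOWN: 'UNKNOWN', TRUE: 'TRUE'}
for a in names:
    for b in names:
        print(names[a], 'AND', names[b], '=', names[ternary_and(a, b)])
```

Every pairwise result lands back on one of the three encoded states, which is exactly what makes the encoding work.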

A Concluding Thought

Our tour through the world of one's complement has taken us from the humble NOT gate to the abstract beauty of encoding ternary logic. We have seen its elegant solution for subtraction, its treacherous pitfalls with simple shortcuts, and the clever fixes engineers devised to tame its eccentricities. The story of one's complement is a powerful reminder that in the world of computing, mathematics is not a spectator sport. Every rule, every property, every quirk of an abstract system has real, tangible consequences, creating a rich tapestry of challenges and opportunities for those who build the machines that shape our world.