
Key Generation

Key Takeaways
  • Secure key generation begins with harvesting true randomness from physical sources and refining it using mathematical tools like randomness extractors and key derivation functions.
  • The security of asymmetric systems like RSA relies not just on hard mathematical problems but also on careful parameter selection to prevent attacks that exploit implementation shortcuts.
  • Effective key management involves a full lifecycle, from hardware-level generation (e.g., PUFs, key ladders) to system-level policies (e.g., KMS, envelope encryption) that ensure confidentiality and control.
  • Real-world key generation balances security requirements against performance and energy costs, while preparing for future threats like quantum computing through hybrid, post-quantum cryptographic methods.

Introduction

The generation of secret keys is the foundational act of digital security. These secret strings of numbers are the bedrock of privacy and trust, enabling everything from secure online banking to confidential communications. But in a world where information can be copied perfectly and instantaneously, how do we create a secret that is truly unpredictable and unique? This fundamental challenge marks the beginning of a journey from the chaotic randomness of the physical world to the structured elegance of pure mathematics. This article explores the principles, mechanisms, and real-world applications of key generation, providing a map of this critical cryptographic landscape.

The "Principles and Mechanisms" section will delve into the heart of what makes a key secure. We will explore the quest for true randomness, the mathematical tools used to purify it, and the theoretical perfection of the One-Time Pad. We will then examine the structured beauty of public-key systems like RSA and Diffie-Hellman, uncovering the computational asymmetries they rely on and the subtle pitfalls that can lead to their catastrophic failure. The "Applications and Interdisciplinary Connections" section will then show these principles in action. We will see how keys are born in silicon hardware, managed by operating systems, and orchestrated at scale in complex environments, revealing the constant trade-offs between security, performance, and usability, and finally, look ahead to the quantum-resistant keys of the future.

Principles and Mechanisms

Imagine we want to create a secret key. What is the essence of such a thing? It is a piece of information, a number, that only we know. Its power comes from its unpredictability. If an adversary can guess it, our secret is worthless. So, the journey of key generation is, at its heart, a quest for perfect, unadulterated unpredictability. It’s a fascinating trip that takes us from the chaotic noise of the physical world to the sublime, rigid structures of pure mathematics.

The Soul of a Secret: The Quest for True Randomness

Where can we find unpredictability? Nature is a good place to start. The exact timing of radioactive decays, the thermal noise in a semiconductor, or the jitter in a computer’s oscillator are all phenomena that are, for all practical purposes, fundamentally random. Devices that harness these physical processes are called ​​True Random Number Generators (TRNGs)​​. They are our direct line to the universe's inherent chaos.

But this raw material is often imperfect. Let's say our TRNG is based on a process that produces a 1 with probability $p = 0.6$ and a 0 with probability $1 - p = 0.4$. This is like a biased coin. It's random, yes, but it's not uniformly random. An adversary who knows this bias has an advantage.

How do we quantify this "amount of randomness"? Information theory gives us two important measures. The first is Shannon entropy, which measures the average surprise or unpredictability of an outcome. For our biased coin, the Shannon entropy is about $0.97$ bits per flip. This is less than the perfect $1$ bit you'd get from a fair coin, reflecting the bias.

However, for cryptography, we are pessimists. We care about the worst-case scenario. The measure for this is min-entropy, which is determined by the most likely outcome. In our case, the most likely outcome is a 1 (with probability $0.6$), so the min-entropy is $-\log_2(0.6) \approx 0.737$ bits. This lower value tells us that, in the worst case, the unpredictability we can count on from each bit is only about $0.737$ bits. To build a key with $128$ bits of security, we can't just take $128$ of these biased bits. We would need at least $\lceil 128 / 0.737 \rceil = 174$ raw bits to guarantee we have accumulated at least $128$ bits of worst-case unpredictability.
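We can check this arithmetic with a few lines of Python (a sketch; the function names are ours, for illustration):

```python
import math

def shannon_entropy(p: float) -> float:
    """Average unpredictability of a biased coin, in bits per flip."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def min_entropy(p: float) -> float:
    """Worst-case unpredictability: determined by the most likely outcome."""
    return -math.log2(max(p, 1 - p))

p = 0.6
h_shannon = shannon_entropy(p)            # about 0.971 bits per flip
h_min = min_entropy(p)                    # about 0.737 bits per flip
raw_bits_needed = math.ceil(128 / h_min)  # raw bits for 128 bits of min-entropy
```

Running this confirms that the pessimistic measure, not the average one, dictates how many raw bits we must harvest.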

So we have this pile of biased, "low-grade" random bits. How do we purify it? We use a mathematical tool called a ​​randomness extractor​​, often implemented with a cryptographic hash function. You can think of it like a refinery. We feed in our long string of 174 (or more) biased bits, and the extractor compresses it, squeezing out the bias and correlations, to produce a shorter, but uniformly random and high-quality output—our cryptographic seed. This process, known as ​​entropy conditioning​​, doesn't create new randomness, but it concentrates the existing randomness into a pure and usable form.
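A minimal sketch of this conditioning step, using SHA-256 as the extractor (real designs follow standards such as NIST SP 800-90B; this is only an illustration):

```python
import hashlib
import random

def condition(raw_bits: list[int]) -> bytes:
    """Compress low-grade random bits into a 256-bit seed with SHA-256.
    This concentrates the entropy that is already there; it cannot create any."""
    raw = bytes(raw_bits)  # one byte per raw bit, each 0 or 1
    return hashlib.sha256(raw).digest()

# Simulate a biased TRNG: 1 with probability 0.6, 0 otherwise.
rng = random.Random(42)  # seeded here only so the demo is reproducible
raw = [1 if rng.random() < 0.6 else 0 for _ in range(174)]
seed = condition(raw)    # 32 uniform-looking bytes, ready to seed a DRBG
```

The refinery metaphor is apt: 174 biased input bits go in, 256 output bits come out, but only about 128 bits of worst-case unpredictability were ever in the pipe.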

The Copy-Paste Catastrophe: Why Identical Isn't Secure

Once we have this precious, high-quality seed, we can use it to power a ​​Deterministic Random Bit Generator (DRBG)​​, also called a Cryptographically Secure Pseudorandom Number Generator (CSPRNG). A DRBG is an algorithm that takes a seed and deterministically expands it into a very long sequence of bits that look random. The key word here is deterministic. Given the same seed, a DRBG will always produce the exact same output stream. Its security rests entirely on the secrecy and unpredictability of its initial seed.

This leads to a fascinating and dangerous failure mode in modern computing, especially in virtualized environments. Imagine a datacenter administrator creates a virtual machine (VM) image and then clones it 100 times to create a fleet of servers, all booting up at the same instant. If the operating system on that image generates its unique SSH host key at the first boot, where does it get its randomness from? At the moment of boot, a headless server has very few sources of entropy—no mouse movements, no keyboard clicks. If the initial state of the random number generator is identical in all clones, and they all execute the same boot sequence in lock-step, they will all generate the exact same "unique" SSH key.

What if there's a little bit of entropy, but not much? Suppose each machine manages to gather a seed with only $H = 12$ bits of entropy, so each machine's seed is one of $2^{12} = 4096$ possibilities. Now we have $m = 120$ machines independently picking a seed. What is the chance that at least two of them pick the same one? This is the famous "birthday problem", and the probability of a collision is surprisingly high: with $120$ instances drawing from a pool of only $4096$ seeds, the probability of at least one collision is over 80%. The result is multiple servers, which are supposed to be distinct, sharing the same secret key: a security disaster. This is why modern systems have developed robust solutions, like blocking key generation until enough entropy is gathered, or having the virtual machine host inject a unique, random seed into each clone at creation.
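The birthday arithmetic is easy to verify exactly:

```python
def collision_probability(m: int, pool: int) -> float:
    """Exact probability that at least two of m independent draws from
    `pool` equally likely seeds collide (the birthday problem)."""
    p_unique = 1.0
    for i in range(m):
        p_unique *= (pool - i) / pool   # the i-th machine avoids all earlier seeds
    return 1.0 - p_unique

p = collision_probability(120, 2**12)   # 120 VMs, 12 bits of entropy each
```

For $m = 120$ and a pool of $4096$ this comes out to roughly 0.83, so a shared "unique" key is the expected outcome, not a freak accident.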

The Unbreakable Code: Perfection in the One-Time Pad

So far, we've focused on generating a string of perfectly random bits. What's the best thing we can do with it? The theoretical pinnacle of cryptography is the ​​One-Time Pad (OTP)​​. The idea is stunningly simple: to encrypt a message, you generate a secret key that is a truly random string of bits of the same length as the message. The encryption is just a bitwise XOR of the message and the key.

For this system to achieve what Claude Shannon called ​​perfect secrecy​​—meaning the ciphertext reveals absolutely zero information about the plaintext—a few ironclad rules must be followed:

  1. The key must be truly and uniformly random. Every possible key must be equally likely. If some keys were more probable, an attacker could simply try them first. This connects to our quest for pure randomness: if our key is chosen from a flawed set of size $2^{n-k}$ instead of the full set of $2^n$ possibilities, an attacker who knows this flaw can learn exactly $k$ bits of information about our message, no more and no less.
  2. The key must be at least as long as the message. Shannon proved that for perfect secrecy, the key space must be at least as large as the message space.
  3. The key must never, ever be reused. If the same key $K$ is used to encrypt two different messages, $M_1$ and $M_2$, an attacker who obtains both ciphertexts, $C_1 = M_1 \oplus K$ and $C_2 = M_2 \oplus K$, can compute $C_1 \oplus C_2 = (M_1 \oplus K) \oplus (M_2 \oplus K) = M_1 \oplus M_2$. This reveals the XOR of the two plaintexts, which is a massive information leak.
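Rule 3 is easy to demonstrate in a few lines of Python; the messages and the helper function here are ours, for illustration:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn!"
m2 = b"retreat at once"
key = os.urandom(len(m1))       # one truly random key...
c1 = xor(m1, key)               # ...used twice: the fatal mistake
c2 = xor(m2, key)

# The key cancels out, leaking the XOR of the two plaintexts:
leak = xor(c1, c2)
assert leak == xor(m1, m2)
```

The eavesdropper never learns the key, yet the relationship between the two messages is laid bare, and with known plaintext structure (headers, common words) the messages themselves often follow.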

The nature of true randomness can be wonderfully counter-intuitive. Imagine you have a stream of perfectly random bits, $U_0, U_1, U_2, \dots$. What if you create a new key stream by XORing adjacent bits: $K_i = U_i \oplus U_{i-1}$? You are mixing the bits together. Surely this must ruin the randomness by creating correlations? Incredibly, it does not. The resulting stream $K_1, K_2, \dots$ is also a sequence of perfectly random, independent, and uniformly distributed bits. This demonstrates the profound and robust nature of what it means to be truly random.
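This claim is small enough to verify by brute force: enumerate every equally likely 8-bit input stream and count how often each derived stream appears.

```python
from collections import Counter
from itertools import product

# Tally the derived stream K_i = U_i XOR U_{i-1} over all 2^8 inputs U.
counts = Counter()
for u in product([0, 1], repeat=8):
    k = tuple(u[i] ^ u[i - 1] for i in range(1, 8))
    counts[k] += 1

# Every one of the 2^7 possible K streams occurs exactly twice
# (u and its bitwise complement map to the same K), so K is uniform.
assert len(counts) == 2**7
assert all(c == 2 for c in counts.values())
```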

The Art of Asymmetry: Keys with Structure

The One-Time Pad is perfect, but it's wildly impractical. The need for a pre-shared key as long as the message is a logistical nightmare. This led to one of the greatest revolutions in cryptography: public-key systems. Here, keys are not just random strings; they are numbers with deep and beautiful mathematical structure.

The most famous example is the RSA cryptosystem. The process of generating an RSA key is a masterpiece of computational asymmetry. We need to find two very large prime numbers, $p$ and $q$. Finding these primes is, relatively speaking, "easy." But if you multiply them together to get $n = pq$, the reverse problem of finding $p$ and $q$ given only $n$ is believed to be extraordinarily "hard." The security of RSA rests on this chasm between the ease of multiplication and the difficulty of factorization.

How do we find a large prime? We don't have a formula for generating them, so we resort to a simple "guess and check" strategy. The Prime Number Theorem tells us that primes are not exceedingly rare: for a $k$-bit number, roughly $1$ in every $k \ln 2$ integers is prime. So we can just pick a random large odd number and test whether it is prime.

But how do we test for primality? A simple approach is trial division: check for divisibility by all primes up to the square root of the number. But for a 2048-bit number, this is impossible. The smallest composite with no prime factor up to $40$ is already $41^2 = 1681$, and the gap only widens: a composite built from two large primes, like an RSA modulus, will never be caught by any feasible amount of trial division.

A more advanced idea is to use Fermat's Little Theorem, which states that if $p$ is prime, then for any $a$ not divisible by $p$, $a^{p-1} \equiv 1 \pmod p$. We can turn this into a test: pick a few bases $a$ and check whether the congruence holds. Unfortunately, there exist treacherous composite numbers, called Carmichael numbers, that satisfy this congruence for every base $a$ to which they are coprime. The smallest such number is $561 = 3 \times 11 \times 17$; others, like $1105 = 5 \times 13 \times 17$, can be constructed to fool specific tests. These deceivers force us to use more sophisticated probabilistic primality tests, like the Miller-Rabin test, which can provide astronomically high confidence that a number is indeed prime.
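A short sketch makes the contrast concrete: $561$ slips past the Fermat test for every coprime base, yet a single round of Miller-Rabin unmasks it. (This toy Miller-Rabin is ours, for illustration; production code runs many random bases.)

```python
from math import gcd

def fermat_passes(n: int, a: int) -> bool:
    """Fermat test to base a: primes always pass; Carmichael numbers fool it."""
    return pow(a, n - 1, n) == 1

def miller_rabin_passes(n: int, a: int) -> bool:
    """One round of Miller-Rabin to base a (n odd, n > 2)."""
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False               # a is a witness: n is definitely composite

n = 561  # = 3 * 11 * 17, the smallest Carmichael number
# 561 fools the Fermat test for every coprime base...
assert all(fermat_passes(n, a) for a in range(2, n) if gcd(a, n) == 1)
# ...but Miller-Rabin to base 2 exposes it as composite.
assert not miller_rabin_passes(n, 2)
```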

From Shared Secret to Symmetric Key: The Final Polish

Public-key systems give us another magical ability: creating a shared secret over an open channel. In the Diffie-Hellman key exchange, two parties, Alice and Bob, can agree on a secret number $S \equiv g^{ab} \pmod p$ while an eavesdropper who sees all their public communication ($g^a \bmod p$ and $g^b \bmod p$) cannot compute it.

But what is this secret value $S$? It's an integer between $1$ and $p - 1$, but it's not a uniformly random number. It's an element of a special mathematical group, and its distribution has structure. Using the raw bits of $S$ directly as a symmetric key is a mistake, as its non-uniformity could introduce subtle weaknesses.

This brings us full circle. Just as we needed to refine the raw, biased output of a TRNG, we need to refine this structured, non-uniform shared secret. We once again turn to an extractor, but this time it's called a Key Derivation Function (KDF). A KDF, like HKDF, takes the shared secret $S$ as its input and, through cryptographic hashing, extracts its entropy to produce one or more fresh, clean, uniformly random symmetric keys of any desired length. It's the final polishing step that turns a mathematical curiosity into a practical cryptographic tool.
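Here is the whole pipeline in miniature: a toy Diffie-Hellman exchange followed by a minimal HKDF in the spirit of RFC 5869. The group parameters are stand-ins chosen for illustration and are far too small for real security:

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman group: p = 2^64 - 59 (prime), g = 5.
# Real deployments use 2048-bit+ groups or elliptic curves.
p, g = 0xFFFFFFFFFFFFFFC5, 5

a = secrets.randbelow(p - 2) + 1            # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1            # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)           # public values, sent in the open
s_alice, s_bob = pow(B, a, p), pow(A, b, p)
assert s_alice == s_bob                     # both arrive at S = g^(ab) mod p

def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """Minimal HKDF: extract the entropy, then expand to `length` bytes."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()        # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()                # expand
        okm += block
        counter += 1
    return okm[:length]

# Polish the structured secret S into a uniform 256-bit symmetric key.
shared = s_alice.to_bytes(8, "big")
key = hkdf_sha256(shared, salt=b"\x00" * 32, info=b"session key", length=32)
```

Note the shape of the construction: the structured group element goes in, and what comes out is indistinguishable-from-uniform key material, labeled by the `info` string so different purposes get different keys.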

A Word of Caution: The Perils of Small Numbers

The generation of a key is not just about the core algorithm; it is a protocol with precise rules. Deviating from these rules, even in seemingly minor ways, can lead to total collapse.

Consider RSA again. The private key $d$ is related to the public key $(n, e)$ by the equation $ed \equiv 1 \pmod{\varphi(n)}$, where $\varphi(n) = (p-1)(q-1)$. This implies that $\frac{e}{\varphi(n)} - \frac{k}{d}$ is a very small number for some integer $k$. Because $\varphi(n)$ is very close to $n$, this means $\frac{k}{d}$ is a very good rational approximation of $\frac{e}{n}$.

What if, for efficiency, someone chose a very small private key $d$? It turns out that this is a catastrophic mistake. A beautiful theorem by Michael Wiener shows that if $d$ is smaller than about $\frac{1}{3} n^{1/4}$, then the fraction $\frac{k}{d}$ will appear as one of the "convergents" in the continued fraction expansion of the public ratio $\frac{e}{n}$. An attacker can simply compute this expansion, test each convergent as a candidate for $\frac{k}{d}$, and recover the private key $d$ in moments. This is a stunning example of how a deep mathematical property, the power of continued fractions to find best rational approximations, can be turned into a devastating attack. It's a stark reminder that in cryptography, the beauty of the mathematics is matched only by the unforgiving precision required to implement it securely.
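Wiener's attack is short enough to sketch in full. The primes and the tiny $d$ below are toy values chosen to satisfy his bound; the check on each convergent simply asks whether the candidate $d$ decrypts a known ciphertext:

```python
def wiener_attack(e: int, n: int):
    """Walk the convergents k/d of e/n; test each denominator d as a
    candidate private exponent (works when d < n^(1/4) / 3)."""
    h0, h1 = 0, 1               # convergent numerators
    k0, k1 = 1, 0               # convergent denominators
    a, b = e, n
    probe = pow(2, e, n)        # "ciphertext" of the known plaintext 2
    while b:
        q, (a, b) = a // b, (b, a % b)     # next continued-fraction digit
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        d = k1
        if d > 0 and pow(probe, d, n) == 2:
            return d            # this d decrypts the probe: key recovered
    return None

# Toy key with a dangerously small d (the primes are illustrative only).
p, q = 10007, 10009
n, phi = p * q, (p - 1) * (q - 1)
d = 23                          # well under n**0.25 / 3, so Wiener applies
e = pow(d, -1, phi)             # the public exponent paired with this d

assert wiener_attack(e, n) == 23
```

The attacker never factors $n$; the continued fraction hands over $d$ directly.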

Applications and Interdisciplinary Connections

We have seen the beautiful mathematical machinery that allows us to generate keys—those secret strings of numbers that are the bedrock of digital privacy. But like a powerful engine still on the factory floor, a key is only potential energy. Its true purpose, its kinetic life, is only revealed when it is put to work. Where do these keys live? How do they shape our world?

Our journey to answer this begins not in the abstract realm of mathematics, but in the very heart of the machine: the silicon of a computer chip. From there, we will travel up through the layers of software that give the machine its purpose, venture into the complex human systems that rely on digital trust, and finally, gaze into the strange and uncertain future of computing itself. We will discover that key generation is not a monolithic act, but a dynamic, living process that connects physics, engineering, law, and even ethics.

The Silicon Soul: Keys Born in Hardware

Imagine you want to keep a secret. The first rule is: don't write it down where others can see it. For a computer, the main memory—the DRAM chips that hold its working thoughts—is like a public square. An adversary with physical access can simply "listen in" on the electrical signals traveling to and from memory. How can a computer keep secrets from such an eavesdropper? It must learn to speak in code, even to itself.

This is the principle behind full memory encryption. Modern processors now include a special Memory Encryption Engine, a tiny cryptographic guardian that sits between the processor and its memory. Before any piece of data leaves the safety of the chip, the engine encrypts it. When the data returns, the engine decrypts it. The computer's "thoughts" are confidential, even when they travel through the public square of the memory bus.

But which key should it use? If it used the same key forever, a compromise would be catastrophic. Instead, engineers devised a beautifully elegant solution called a key ladder. Deep inside the chip is a root key, $K_{\text{root}}$, etched into its very being in a way that it can never be altered or read out: a secret the chip is born with. Each time the computer boots up, it uses this root key to forge a brand new, ephemeral session key, $K_{\text{session}}$. It does this by combining $K_{\text{root}}$ with a dash of pure randomness, a nonce generated by a true random number generator on the chip. $K_{\text{session}}$ exists only for the current working session and is held in a special register that is wiped clean on reset. It's this session key that is then used to encrypt the memory traffic.

This simple two-step derivation, $K_{\text{root}} \to K_{\text{session}}$, provides a profound security property called forward secrecy. Even if an attacker somehow compromises the system and discovers the current session key, all past sessions, encrypted with different, long-vanished keys, remain secure. Furthermore, it defends against replay attacks, where an adversary records old encrypted memory contents and tries to feed them back to the processor: because the session key is fresh on every boot, the old data is just meaningless gibberish. The key generation process has given the computer a short-term memory, protecting its present from its past.
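The ladder itself is just a derivation step. A software sketch of the idea (real engines do this in dedicated silicon; the constant, labels, and function names here are illustrative):

```python
import hashlib
import hmac
import os

K_ROOT = bytes.fromhex("00" * 31 + "01")    # stand-in for the fused root key

def derive_session_key() -> tuple[bytes, bytes]:
    """One rung of the ladder: root key + fresh nonce -> session key."""
    nonce = os.urandom(16)                   # from the on-chip TRNG
    k_session = hmac.new(K_ROOT, b"memory-encryption" + nonce,
                         hashlib.sha256).digest()
    return k_session, nonce

k1, _ = derive_session_key()                 # this boot's key
k2, _ = derive_session_key()                 # next boot's key
assert k1 != k2   # fresh every boot: old recorded traffic cannot be replayed
```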

We can take this connection between the physical and the digital even further. What if a key wasn't stored at all, but was an intrinsic, unclonable property of the hardware itself? This is the fascinating idea behind ​​Physical Unclonable Functions​​, or PUFs. During the manufacturing of a silicon chip, microscopic variations—the precise position of atoms, the thickness of wires—create a unique, chaotic pattern. A PUF is a circuit designed to turn this microscopic randomness into a consistent digital "fingerprint." When prompted, it produces a unique binary string, but because the underlying structure is so complex and random, it is physically impossible to create a clone of the device that produces the same string.

The challenge, of course, is that the physical world is noisy. Temperature fluctuations or tiny voltage drops can cause the PUF to produce a slightly different string each time: a "noisy" reading. You can't use a noisy key for cryptography. The solution is a masterpiece of information theory called a fuzzy extractor. It's a two-part algorithm. At enrollment, it takes one noisy reading and produces two things: a stable, perfectly random secret key, $R$, and a piece of public "helper data," $W$. The helper data is like a map of the noise. Later, you can feed a new, slightly different noisy reading along with the helper data into the second part of the algorithm, and it will magically reproduce the exact same secret key, $R$. From this one stable secret, a Key Derivation Function like HKDF can then generate a whole family of keys for different purposes, such as a long-term device identity key and ephemeral session keys. Here, key generation becomes a bridge from the messy, analog reality of physics to the pristine, digital world of cryptography.
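A fuzzy extractor can be sketched with the classic code-offset construction: hide a random codeword under the reading at enrollment, then let a repetition code's majority vote absorb the noise later. The tiny parameters below are for illustration only:

```python
import hashlib
import secrets

REP = 15          # repeat each secret bit 15 times; majority vote fixes noise
SECRET_BITS = 16  # length of the stable secret (small, for illustration)

def enroll(reading: list[int]) -> tuple[bytes, list[int]]:
    """One noisy PUF reading -> stable key R plus public helper data W."""
    s = [secrets.randbelow(2) for _ in range(SECRET_BITS)]
    codeword = [bit for bit in s for _ in range(REP)]       # repetition code
    helper = [r ^ c for r, c in zip(reading, codeword)]     # W = X xor C
    return hashlib.sha256(bytes(s)).digest(), helper

def reproduce(reading: list[int], helper: list[int]) -> bytes:
    """A later, noisy reading plus W -> the exact same key R."""
    noisy_code = [r ^ w for r, w in zip(reading, helper)]   # C xor noise
    s = []
    for i in range(SECRET_BITS):
        block = noisy_code[i * REP:(i + 1) * REP]
        s.append(1 if sum(block) > REP // 2 else 0)         # majority vote
    return hashlib.sha256(bytes(s)).digest()

x = [secrets.randbelow(2) for _ in range(SECRET_BITS * REP)]  # enrollment read
key, w = enroll(x)
x_noisy = x.copy()
for i in range(0, len(x_noisy), REP):      # simulate noise: flip one bit
    x_noisy[i] ^= 1                        # in every 15-bit block
key2 = reproduce(x_noisy, w)
assert key2 == key    # noise corrected; the same secret key comes back
```

The helper data $W$ is safe to publish: it is the reading masked by a random codeword, so on its own it reveals nothing usable about the key.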

The Operating System's Burden: A Key for Every File

Moving up from the hardware, we encounter the operating system (OS), the master coordinator of all software. One of its jobs is to manage files—potentially millions of them. If we want to encrypt every file with a unique key, where do all these keys come from?

A clever-sounding idea is to derive a key for each file from a master key and the file's unique identifier, its "inode number." The inode is a serial number the filesystem uses to track a file's metadata. The formula would be simple: $K_{\text{file}} = \mathrm{KDF}(K_{\text{master}}, \text{inode})$. This seems efficient and elegant.

But here lies a subtle trap, a cryptographic ghost story. In most filesystems, when you delete a file, its inode number is eventually recycled and assigned to a new file. Imagine you create a file, write a secret in it, and then delete it. Later, you create a new, unrelated file, and the OS happens to give it the same old inode number. It will be encrypted with the exact same key as your deleted secret. An adversary who could recover the "ghost" data of the old ciphertext from the disk and also see the new ciphertext could compare them. As we've seen, when two different messages are encrypted with the same key and nonce (in a stream cipher), the encryption can be peeled away, revealing the relationship between the two plaintext messages. This is a catastrophic failure of confidentiality.

The lesson is profound: the inputs to your key generation function must be as unique and as long-lived as the secrecy you require. The solution is to augment the inode with something truly unique, like a random "salt" generated for each new file or an "inode generation number" that increments every time an inode is reused. Key generation, we learn, is not just about producing randomness, but about carefully managing uniqueness.
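The flaw and the fix are one line of key derivation apart. A sketch (function names are ours, for illustration):

```python
import hashlib
import hmac
import os

K_MASTER = os.urandom(32)

def key_from_inode(inode: int) -> bytes:
    """The flawed scheme: the key depends only on the recycled inode number."""
    return hmac.new(K_MASTER, inode.to_bytes(8, "big"),
                    hashlib.sha256).digest()

def key_from_inode_and_salt(inode: int, salt: bytes) -> bytes:
    """The fix: a fresh random salt per file makes every key unique."""
    return hmac.new(K_MASTER, inode.to_bytes(8, "big") + salt,
                    hashlib.sha256).digest()

# Inode 4242 is freed, then reassigned to a brand-new, unrelated file:
ghost_key = key_from_inode(4242)          # the deleted file's key
new_key = key_from_inode(4242)            # the new file's key
assert ghost_key == new_key               # same key for both: the ghost story

k1 = key_from_inode_and_salt(4242, os.urandom(16))
k2 = key_from_inode_and_salt(4242, os.urandom(16))
assert k1 != k2                           # salted keys never repeat
```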

The Price of Privacy: Performance and Engineering Trade-offs

A natural question arises from all this talk of encryption: doesn't it slow everything down? If we are encrypting every block of memory, every file, every network packet, surely our computers must grind to a halt.

Here we have a wonderful surprise. While encryption is not free, modern processors are extraordinarily fast. Let's consider reading an encrypted file from a hard drive. Getting the data involves moving a physical read/write head, waiting for the disk to spin to the right sector, and then transferring the data. These physical actions might take several milliseconds ($1\ \text{ms} = 0.001\ \text{s}$). Now, how long does it take a modern CPU to decrypt that same block of data? The answer is often on the order of a few microseconds ($1\ \mu\text{s} = 0.000001\ \text{s}$), a thousand times faster. The cryptographic overhead is so small it is utterly dwarfed by the mechanical latency of the disk. The time spent on key derivation and decryption is lost "in the noise" of normal operation.

This remains true even for the complex, next-generation algorithms needed to protect against quantum computers. A full handshake to establish a hybrid post-quantum secure channel might take a few milliseconds in total—a one-time cost—and the per-packet encryption overhead after that remains in the microseconds.

This doesn't mean performance is irrelevant. In small, battery-powered devices in the Internet of Things (IoT) or Cyber-Physical Systems (CPS), every CPU cycle consumes precious energy. Here, key generation becomes a problem of engineering trade-offs. We must operate within a "security budget". How often should a device establish a new session key? Doing so frequently provides better security by limiting the "window of opportunity" for an attacker, but each key establishment handshake consumes CPU time and energy. Doing so too infrequently saves power but might exhaust the pool of unique nonces for a given key or create an unacceptably large replay window. The right policy is a carefully calculated balance between the security requirements and the computational budget of the device.

Managing Trust at Scale: The Human Element

So far, our examples have lived inside a single machine. But our most important data lives in vast, distributed systems, accessed by thousands of people. Think of a hospital's electronic health record (EHR) system, holding the private medical histories of an entire community. Protecting this data is not just a technical problem; it is a legal and ethical imperative under regulations like HIPAA.

Here, the challenge is not just generating a key, but managing its entire ​​lifecycle​​. This lifecycle includes:

  • ​​Generation:​​ Creating the key with true randomness.
  • ​​Storage:​​ Protecting the key from theft.
  • ​​Rotation:​​ Periodically replacing the key to limit the damage if it is ever compromised.
  • ​​Revocation:​​ Immediately disabling a key if it is suspected of being compromised.
  • ​​Destruction:​​ Securely and irreversibly erasing the key at the end of its life.

To manage this lifecycle at scale, we use specialized technologies: ​​Hardware Security Modules (HSMs)​​ and ​​Key Management Systems (KMS)​​. An HSM is a physical fortress for keys—a dedicated, tamper-resistant appliance whose sole purpose is to generate, store, and use keys without ever letting them leave its secure boundary in plaintext. A KMS is the administrative brain. It's a service that orchestrates the key lifecycle, enforcing policies about who can use which key and for what purpose, creating detailed audit logs, and automating rotation schedules. The KMS is the banker who manages access to the vault; the HSM is the vault itself.

These systems enable powerful cryptographic patterns like ​​envelope encryption​​. Instead of using one master key to encrypt all the data, we generate a new, unique Data Encryption Key (DEK) for each piece of data (or each patient record). Then, we use a master Key Encryption Key (KEK), protected by the HSM, to encrypt each DEK. The encrypted DEK is stored alongside the encrypted data. It's like putting each secret in its own locked box (the DEK) and then putting the key to that box inside a sealed envelope (the KEK). To read a secret, you must first get permission from the KMS to use the master key to open the envelope and retrieve the box key. This provides incredible flexibility and control. To "crypto-shred" a patient's record, for instance, we don't have to overwrite petabytes of data; we simply destroy the one tiny DEK associated with it, and the data becomes permanently irrecoverable.
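The envelope pattern itself is simple to sketch. Standard-library Python has no AES, so an HMAC-based keystream stands in for a real cipher like AES-GCM, and the KEK lives in a variable rather than an HSM; this illustrates the structure only:

```python
import hashlib
import hmac
import os

def stream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with an HMAC-derived keystream.
    Symmetric, so the same call both encrypts and decrypts."""
    out, counter = b"", 0
    while len(out) < len(data):
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

KEK = os.urandom(32)    # the master key; in production it never leaves the HSM

def seal(record: bytes):
    dek, n1, n2 = os.urandom(32), os.urandom(16), os.urandom(16)
    ciphertext = stream_xor(dek, n1, record)    # data locked with its own DEK
    wrapped_dek = stream_xor(KEK, n2, dek)      # DEK sealed in the envelope
    return ciphertext, n1, wrapped_dek, n2

def unseal(ciphertext, n1, wrapped_dek, n2) -> bytes:
    dek = stream_xor(KEK, n2, wrapped_dek)      # the KMS opens the envelope
    return stream_xor(dek, n1, ciphertext)

ct, n1, wdek, n2 = seal(b"patient record #1")
assert unseal(ct, n1, wdek, n2) == b"patient record #1"
# Crypto-shredding: destroy the few bytes of wdek and ct is gone forever.
```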

Gazing into the Quantum Crystal Ball

The story of key generation would be incomplete without a look to the future, and a looming storm on the horizon: the quantum computer. A sufficiently powerful quantum computer, using Shor's algorithm, will be able to break most of the public-key cryptography we use today for establishing secure connections.

This creates a chilling threat known as a ​​Harvest-Now, Decrypt-Later (HNDL)​​ attack. An adversary can record our encrypted communications today—our financial transactions, our state secrets, our private messages—and simply store them. They can't decrypt them now. But they are betting that in the future, once they have a quantum computer, they can use it to break today's key establishment handshake, reveal the session key, and decrypt the harvested data.

How do we protect today's secrets from tomorrow's computers? The answer is to start generating our keys in a new, quantum-resistant way. The cryptographic community is already standardizing a new generation of Post-Quantum Cryptography (PQC) algorithms. During this transition, the safest strategy is a ​​hybrid key exchange​​. When two parties establish a session key, they perform two key exchanges in parallel: one classical (like we use today) and one post-quantum. They then combine the results from both to generate the final session key. An adversary would have to break both the classical and the PQC algorithm to succeed. This gives us a bridge to the quantum future, ensuring our security as long as at least one of the two methods remains unbroken.
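Combining the two exchanges is itself a key-derivation step: feed both shared secrets into one KDF, so the output stays safe as long as either input does. A sketch, with random bytes standing in for the outputs of the classical and post-quantum exchanges:

```python
import hashlib
import hmac
import os

def combine_hybrid(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from two independent key exchanges.
    Recovering the result requires breaking BOTH inputs."""
    ikm = classical_secret + pq_secret
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()  # HKDF-extract
    return hmac.new(prk, b"hybrid session key" + b"\x01",
                    hashlib.sha256).digest()                    # HKDF-expand

# Stand-ins for an ECDH shared secret and a post-quantum KEM shared secret:
ecdh_secret = os.urandom(32)
kem_secret = os.urandom(32)
session_key = combine_hybrid(ecdh_secret, kem_secret)
```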

Furthermore, quantum computers also affect symmetric keys, albeit less dramatically. Grover's algorithm gives a quadratic speedup to brute-force attacks, effectively halving the bit-strength of a key. To maintain our current level of security, we must double the key sizes we use, for example by moving from 128-bit keys to 256-bit keys for symmetric ciphers like AES.

The journey of a key, we see, is a thread woven through our entire technological world. It begins in the fundamental randomness of physics, is forged by the logic of mathematics and the pragmatism of engineering, is governed by the policies of our social and legal systems, and now stretches into the uncertain frontiers of quantum mechanics. It is a constant, evolving quest to create certainty and trust in a world of endless complexity.