
Semiconductor Device Simulation

SciencePedia
Key Takeaways
  • Semiconductor simulation is a two-act play, combining process simulation (virtual manufacturing) with device simulation (virtual testing) to ensure physical consistency.
  • The classical drift-diffusion model forms the core of device simulation by coupling Poisson's equation with continuity equations that describe electron and hole movement.
  • For modern nanoscale transistors, quantum effects like confinement are critical and are captured using computationally efficient "quantum potential" or "density-gradient" models.
  • Simulations are calibrated against experimental measurements to create predictive models, which then inform the development of compact models used for large-scale circuit design.

Introduction

The relentless miniaturization of electronics has pushed transistors to atomic scales, where their behavior is governed by complex physical phenomena. Designing these devices through costly trial-and-error fabrication is no longer feasible. This challenge gives rise to semiconductor device simulation, a powerful virtual laboratory that allows engineers to build and test transistors within a computer. This article addresses the fundamental question: how do we accurately model the intricate dance of electrons in a semiconductor to predict a device's real-world performance? The following chapters will first delve into the "Principles and Mechanisms," exploring the core physical equations, from the classical drift-diffusion model to the quantum corrections required for modern devices. Subsequently, the "Applications and Interdisciplinary Connections" section will reveal how these simulations serve as an indispensable bridge between manufacturing, experimental physics, and large-scale circuit design, making them a cornerstone of the technology we use every day.

Principles and Mechanisms

Imagine you want to build a fantastically complex clock, not with gears and springs, but with billions of microscopic switches called transistors. Before you head to the foundry to spend millions of dollars, you'd probably want a blueprint. More than a blueprint, you'd want a way to know exactly how it will behave once it's built. Semiconductor device simulation is that virtual laboratory—a world built of mathematics and physics inside a computer, where we can construct and test the most advanced electronics ever conceived.

But how do you build such a virtual world? You can't just draw a transistor and say "work!" You have to build it from the ground up, respecting the fundamental laws of nature. The principles and mechanisms of device simulation are a beautiful story of how physicists and engineers have learned to translate the intricate dance of electrons in a crystal into a set of solvable equations. It's a journey that takes us from the simulated factory floor to the strange world of quantum mechanics.

A Tale of Two Simulations: Process and Device

The first key principle is to realize that a transistor's behavior is a direct consequence of how it was made. You can't separate the performance from the fabrication. Therefore, a complete simulation is a two-act play.

​​Act I: The Virtual Factory (Process Simulation)​​

In this first stage, the computer meticulously mimics the manufacturing process. It simulates the deposition of ultra-thin layers of materials, the etching of complex patterns using light, and the crucial step of ​​doping​​—implanting impurity atoms like boron or phosphorus into the silicon crystal to control its conductivity. This simulation doesn't just track the geometry; it solves equations for mass conservation and reaction kinetics to predict the final, precise location of every dopant atom. It even calculates the immense mechanical stresses and strains that build up in the material as it's processed. The output is not just a drawing, but a complete, three-dimensional digital replica of the transistor, complete with maps of its material composition, dopant concentrations, and internal stress fields.

​​Act II: The Virtual Test Bench (Device Simulation)​​

This is where the magic happens. The incredibly detailed structure from Act I becomes the stage for Act II. The device simulator takes this structure and applies the laws of electricity and quantum mechanics to predict its electrical behavior. It answers the crucial questions: How much current flows when we apply a voltage? How fast can it switch? How much power does it consume?

The beauty of this two-act structure is its guarantee of ​​physical consistency​​. The device we test is the device we built. Every nuance of the manufacturing process—a slight over-etch, a subtle variation in doping—is faithfully carried from the process simulator to the device simulator, ensuring our predictions are as close to reality as possible.

The Laws of Electronic Motion

So, what are these "laws" that the device simulator solves? At the heart of most device simulation lies a trio of coupled equations known as the ​​drift-diffusion model​​. Let's think of the semiconductor as a stage for an orchestra of charge carriers—negatively charged electrons and positively charged "holes" (which are really just the absence of an electron).

The Electrostatic Stage: Poisson's Equation

The landscape of this stage—its hills and valleys—is the electrostatic potential, $\phi$. This potential is governed by Poisson's equation:

$$\nabla \cdot (\epsilon \nabla \phi) = -\rho$$

where $\epsilon$ is the material's permittivity (its ability to store electric fields) and $\rho$ is the total charge density. This equation is simply Gauss's law from introductory physics, stating that electric fields originate from charges. What makes it tricky is that the charge density $\rho$ is the sum of all charges: the fixed ionized dopant atoms from the process simulation ($N_D^+$ and $N_A^-$), but also the mobile electrons ($n$) and holes ($p$) themselves!

$$\rho = q\,(p - n + N_D^+ - N_A^-)$$

This creates a self-consistent feedback loop: the carriers create a potential landscape, but that very landscape tells the carriers where to go. It's like a crowd of people on a trampoline; their collective weight creates the shape of the surface, which in turn causes them to roll towards the center. The simulator must find a stable solution where the carrier positions and the potential landscape are in perfect harmony.
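As a toy illustration of the electrostatic stage, here is a minimal 1D Poisson solver (finite differences, uniform permittivity, grounded contacts at both ends), checked against the analytic solution for a uniform charge density. The domain size, charge value, and permittivity are assumed illustrative numbers, not taken from any particular device:

```python
import numpy as np

def solve_poisson_1d(rho, eps, dx):
    """Solve eps * phi'' = -rho on a uniform 1D grid with phi = 0 at both ends.

    rho : charge density at the interior grid points (C/m^3)
    eps : permittivity, assumed uniform (F/m)
    dx  : grid spacing (m)
    """
    n = len(rho)
    # Second-order finite-difference Laplacian (tridiagonal matrix)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    # eps * (A @ phi) / dx^2 = -rho  ->  A @ phi = -rho * dx^2 / eps
    return np.linalg.solve(A, -rho * dx**2 / eps)

# Sanity check against the analytic solution for a uniform charge density
# between two grounded contacts: phi(x) = rho * x * (L - x) / (2 * eps).
L, N = 1e-6, 201                 # 1 um domain, 201 interior points
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)
rho = np.full(N, 1.6e-3)         # uniform charge, ~q * 1e16 m^-3 (assumed)
eps = 1.04e-10                   # roughly the permittivity of silicon (F/m)
phi = solve_poisson_1d(rho, eps, dx)
phi_exact = rho * x * (L - x) / (2 * eps)
print(f"peak potential = {phi.max():.3e} V")
```

A real simulator would wrap such a solve in an outer loop, recomputing $n$ and $p$ from the new potential until the carriers and the landscape agree.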

The Dance of the Carriers: Drift and Diffusion

Once the stage is set, how do the carriers move? They follow two primary commands.

  1. Drift: This is the most intuitive motion. The electric field, $\mathbf{E} = -\nabla \phi$, which is the slope of the potential landscape, pushes on the charged carriers. Electrons are pushed "uphill" on the potential map, and holes are pushed "downhill." This is drift—an orderly march driven by the electric field.

  2. ​​Diffusion:​​ This is motion driven by chaos. If you have a high concentration of carriers in one spot, their random thermal jiggling will cause them to naturally spread out towards regions of lower concentration, just like a drop of ink spreading in water. This is diffusion.

The drift-diffusion model combines these two motions into a single expression for the electron current density ($\mathbf{J}_n$) and hole current density ($\mathbf{J}_p$):

$$\mathbf{J}_n = q \mu_n n \mathbf{E} + q D_n \nabla n$$
$$\mathbf{J}_p = q \mu_p p \mathbf{E} - q D_p \nabla p$$

Here, $\mu$ is the mobility (how easily carriers drift in a field) and $D$ is the diffusivity (how quickly they diffuse). These two are beautifully linked by the Einstein relation, $D = \mu k_B T / q$, showing they are two sides of the same coin of thermal motion.
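The Einstein relation is easy to evaluate numerically. The mobility below is a commonly quoted textbook value for electrons in lightly doped silicon, assumed here for illustration:

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C
T = 300.0               # room temperature, K

mu_n = 1400.0           # assumed electron mobility in lightly doped Si, cm^2/(V s)
V_T = k_B * T / q       # thermal voltage, ~25.9 mV at 300 K
D_n = mu_n * V_T        # Einstein relation: D = mu * k_B * T / q, in cm^2/s

print(f"thermal voltage = {V_T * 1000:.1f} mV, D_n = {D_n:.1f} cm^2/s")
```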

Keeping Count: The Continuity Equation

We have the landscape and the rules of motion. The final piece of the puzzle is simple accounting. This is the continuity equation, which states that the number of electrons in any tiny volume can only change for one of two reasons: either they flow in or out (the divergence of the current, $\nabla \cdot \mathbf{J}$), or they are created or destroyed inside the volume.

$$\frac{\partial n}{\partial t} = \frac{1}{q} \nabla \cdot \mathbf{J}_n - U$$
$$\frac{\partial p}{\partial t} = -\frac{1}{q} \nabla \cdot \mathbf{J}_p - U$$

The term $U$ represents the net recombination-generation rate. Sometimes, an electron and hole meet and annihilate each other, releasing their energy. This is recombination. Other times, energy (like from light in a solar cell) can create a new electron-hole pair. This is generation. The models for $U$ can be quite complex themselves. For instance, at low carrier concentrations, recombination often happens at defects in the crystal (Shockley-Read-Hall recombination), a process that scales linearly with the carrier concentration. At very high concentrations, a three-body "billiard ball" process takes over, in which a recombining electron-hole pair hands its energy to a third carrier (Auger recombination), a process that scales with the cube of the concentration. Simulators must account for these different mechanisms to be accurate across all operating conditions.
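A sketch of how a simulator might evaluate $U$, combining the standard SRH expression for a mid-gap trap with a simple Auger term. The lifetimes and Auger coefficients below are typical textbook orders of magnitude for silicon, assumed rather than sourced from this article:

```python
n_i = 1.0e10            # intrinsic carrier concentration of Si at 300 K, cm^-3 (approx.)

def u_srh(n, p, tau_n=1e-6, tau_p=1e-6, n1=None, p1=None):
    """Shockley-Read-Hall net recombination via a trap level (cm^-3 / s)."""
    n1 = n_i if n1 is None else n1   # mid-gap trap: n1 = p1 = n_i
    p1 = n_i if p1 is None else p1
    return (n * p - n_i**2) / (tau_p * (n + n1) + tau_n * (p + p1))

def u_auger(n, p, C_n=2.8e-31, C_p=1.0e-31):
    """Auger net recombination; scales ~cubically at high injection (cm^-3 / s)."""
    return (C_n * n + C_p * p) * (n * p - n_i**2)

# In equilibrium (n * p = n_i^2) both rates vanish: no net recombination.
assert u_srh(n_i, n_i) == 0.0 and u_auger(n_i, n_i) == 0.0

# At high injection (n = p = Delta), SRH grows ~Delta while Auger grows ~Delta^3.
for Delta in (1e15, 1e16, 1e17):
    print(f"{Delta:.0e}: SRH = {u_srh(Delta, Delta):.2e}  Auger = {u_auger(Delta, Delta):.2e}")
```

Both rates vanish in equilibrium, and at high injection the Auger term overtakes SRH because of its cubic scaling, which is exactly why simulators switch emphasis between mechanisms across operating conditions.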

The Art of the Possible: Making the Simulation Work

Having the laws of physics is one thing; getting a computer to solve them for a complex 3D structure is another entirely. This is where physical intuition and numerical artistry come into play.

A Tale of Two Regions: The Quasi-Neutral Shortcut

If you look at the non-dimensionalized Poisson's equation, the charge term is multiplied by a very large number in most parts of the device. This means even a tiny imbalance between positive and negative charges would create enormous electric fields. Nature doesn't like that. So, in the boring bulk regions of the semiconductor, far from any junctions, the material maintains near-perfect charge neutrality: $p - n + N_D^+ - N_A^- \approx 0$.

This physical insight is a godsend for simulation. In these regions, we can replace the complex differential Poisson equation with this simple algebraic one. This shortcut, known as the quasi-neutral approximation, makes the system of equations much easier for a computer to solve, dramatically improving numerical stability and speed. It allows the simulator to focus its heavy-duty computational firepower on the interesting regions where charge is not neutral, like the p-n junctions that form the heart of the transistor.
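In a uniform quasi-neutral region, this algebraic shortcut can even be solved in closed form. Assuming full dopant ionization and the equilibrium mass-action law $np = n_i^2$ (standard textbook assumptions):

```python
import math

n_i = 1.0e10   # Si intrinsic concentration at 300 K, cm^-3 (approx.)

def equilibrium_carriers(N_D, N_A):
    """Solve p - n + N_D - N_A = 0 together with n * p = n_i^2."""
    N = N_D - N_A                                   # net doping
    n = 0.5 * (N + math.sqrt(N**2 + 4 * n_i**2))    # quadratic formula (n > 0 root)
    p = n_i**2 / n
    return n, p

# An n-type region doped at 1e17 cm^-3: n ~ N_D, p is tiny.
n, p = equilibrium_carriers(N_D=1e17, N_A=0.0)
print(f"n = {n:.3e} cm^-3, p = {p:.3e} cm^-3")
# Neutrality holds to rounding error: p - n + N_D - N_A ~ 0
assert abs(p - n + 1e17) / 1e17 < 1e-12
```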

Connecting to the Outside World: Boundary Conditions

A simulated device doesn't exist in a vacuum. It's connected to the outside world by metal contacts. Defining what happens at these boundaries is critically important. For an ideal ohmic contact, we assume the metal acts as an infinite reservoir of carriers, holding the semiconductor in a state of local thermal equilibrium right at the interface. This means we can precisely calculate the electrostatic potential and carrier concentrations at the boundary by enforcing two simple rules: the quasi-Fermi levels must align with the metal's Fermi level, and the region must be charge-neutral. This provides the firm Dirichlet boundary conditions needed to anchor the entire simulation and allow current to flow in and out in a physically meaningful way. This is just one example of the care that must be taken; correctly representing physics at the interfaces, whether between different semiconductor materials or between grid points in the simulation itself, requires elegant numerical techniques to ensure fundamental laws like current conservation are never violated.

When the Classical World Fails: Entering the Quantum Realm

For decades, the drift-diffusion model was the undisputed king of device simulation. But as transistors shrank to sizes of just a few dozen atoms across, a strange new world began to emerge: the world of quantum mechanics.

The Quantum Squeeze

In the classical picture, the electrons in a transistor's channel are most concentrated right at the interface with the gate insulator. But electrons are not just point particles; they are also waves. When you confine a wave to a very narrow space—like the channel of a modern FinFET transistor—it behaves in a peculiar way. The wave cannot exist right at the hard wall of the interface. Its energy increases, and the peak of its probability density is pushed away from the interface. This is quantum confinement. This seemingly small shift has big consequences: it makes the gate look like it's farther away, reducing its control over the channel and shifting the transistor's threshold voltage. Our classical model was now officially wrong.

A Clever Fix: Quantum Potentials

Does this mean we have to throw away our trusted drift-diffusion framework and solve the full, monstrously complex Schrödinger equation for every electron? Thankfully, no. Physicists devised a brilliant patch. They asked, "Can we add a new term to our classical model that mimics this quantum behavior?" The answer is the quantum potential or density-gradient model.

You can think of this as giving each electron a personal space bubble. The quantum potential is essentially a repulsive force that pushes electrons away from regions where their concentration (their wave function) changes too abruptly, like at a sharp interface. This force is derived from the fundamental operators of quantum mechanics, like the momentum operator $\hat{\mathbf{p}} = -i\hbar\nabla$ and the effective-mass Hamiltonian $\hat{H} = \hat{\mathbf{p}}^2/(2m^*) + V$. The result is a modified set of equations that still looks much like the drift-diffusion model and can be solved with similar techniques, but now magically reproduces the correct, quantum-mechanical charge distribution.
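For concreteness, the extra "quantum potential" term added to the classical equations typically has the Bohm-like form below. The exact prefactor and the calibration factor $\gamma$ vary between simulators and conventions, so treat this as a representative sketch rather than a universal definition:

```latex
% Density-gradient ("quantum potential") correction for electrons:
% a repulsive term that grows wherever sqrt(n) curves sharply, e.g. at interfaces.
\Lambda_n \;=\; -\,\frac{\gamma\,\hbar^{2}}{2\,m^{*}}\,
                \frac{\nabla^{2}\sqrt{n}}{\sqrt{n}}
```

Because $\Lambda_n$ depends only on the carrier density and its derivatives, it slots directly into the drift-diffusion framework as an added contribution to the effective potential.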

These quantum correction models are a masterpiece of pragmatism. They are not an exact solution to quantum mechanics, but an approximation. Their parameters are often carefully calibrated by comparing their results to more fundamental (and much slower) quantum transport solvers. But they are a stunningly effective way to extend the life of our classical simulation framework into the nano-era, capturing the essential quantum effects while remaining computationally feasible.

This hierarchy of models—from process to device, from classical to quantum-corrected—forms the foundation of modern semiconductor device simulation. This entire, elaborate TCAD simulation often serves one ultimate purpose: to generate the data needed to build much simpler, computationally instantaneous ​​compact models​​. These are the models that circuit designers use in tools like SPICE to simulate the behavior of an entire chip with billions of transistors—a topic for another day, but one that rests firmly on the shoulders of the physical principles we have just explored.

Applications and Interdisciplinary Connections

Having journeyed through the fundamental principles that govern the flow of charge in semiconductors, we now arrive at a pivotal question: What is all this for? The intricate dance of electrons and holes, described by the elegant yet formidable equations of transport and electrostatics, is not merely an academic curiosity. It is the very foundation upon which our digital world is built. But how do we bridge the vast chasm between the abstract laws of physics and the tangible reality of a functioning microchip with billions of transistors? The answer lies in the art and science of semiconductor device simulation.

This is not just about plugging numbers into a computer. It is a virtual laboratory, a digital foundry where we can forge transistors in the silicon ether, test them, break them, and perfect them before a single atom is deposited in a real factory. In this chapter, we will explore how device simulation serves as a grand unifier, weaving together the disparate threads of manufacturing, experimental physics, statistical theory, and circuit design into a coherent tapestry of modern technology.

From Blueprint to Reality: The Digital Twin of a Transistor

Imagine building a modern skyscraper. An architect's beautiful sketch is not enough; engineers must simulate the stresses, the wind loads, and the material properties to ensure the structure stands. The creation of a transistor, a skyscraper of the atomic realm, is no different. Technology Computer-Aided Design (TCAD) is the essential engineering discipline that simulates this entire process.

The journey begins in a virtual representation of the fabrication facility. Here, we don't solve for currents and voltages yet; we simulate the very manufacturing steps themselves. We model the intricate process of photolithography, where light carves patterns into a resist. We simulate the directional etching that digs trenches into the silicon with nanometer precision. We model the delicate, layer-by-layer growth of insulating films via Atomic Layer Deposition (ALD). The output of this process simulation is not a working device, but a highly detailed three-dimensional map—a digital twin of the transistor's anatomy. This map meticulously details the final geometry, the exact boundaries between different materials, and the spatial concentration of dopant atoms that have been implanted and diffused throughout the structure.

This detailed map becomes the input for the device simulation. To bring the virtual transistor to life and predict its electrical behavior, the simulator must be given a well-posed problem. This includes not just the final geometry, but also the physical properties associated with each region, such as the spatially varying permittivity $\varepsilon(\mathbf{r})$ and the all-important doping profiles $N_D(\mathbf{r})$ and $N_A(\mathbf{r})$. Furthermore, we must specify the boundary conditions: how will this device connect to the outside world? This involves defining the properties of the metal contacts, such as their work functions ($\Phi_M$), which set the electrical potential, and the nature of the artificial boundaries of our simulation domain, which are typically set to prevent any unphysical influence on the device's operation. Only with this complete set of information, passed seamlessly from the process simulation, can the fundamental equations of device physics—Poisson's equation and the drift-diffusion model—be solved to yield a meaningful prediction of the transistor's performance.

Confronting the Complexity of the Nanoscale

As transistors have shrunk to staggering dimensions, with features measured in mere dozens of atoms, their behavior has become bewilderingly complex. The simple, idealized models of introductory textbooks begin to fail. Here, device simulation becomes an indispensable tool for navigating the many-body problem of a modern transistor, where everything seems to affect everything else.

One of the most beautiful examples of this complexity is the interplay between mechanical stress and electrical performance. In a state-of-the-art device like a $7\,\mathrm{nm}$ FinFET, engineers intentionally introduce mechanical stress into the silicon channel to enhance its performance—a technique known as strain engineering. This is often achieved by embedding materials like silicon-germanium in the source and drain regions, which have a different natural lattice spacing than silicon and thus squeeze or stretch the channel. This mechanical strain, represented by a stress tensor $\boldsymbol{\sigma}(\mathbf{x})$, alters the energy band structure of the silicon, which in turn changes the mobility $\mu$ of the electrons and holes flowing through it. A predictive simulation of such a device is no longer a purely electrical problem; it is a multiphysics challenge. To accurately calculate the current, the simulation must first solve the equations of solid mechanics to find the stress field, and then use that field to modify the parameters of the electrical transport equations at every point in the device.
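To first order, the effect of stress on mobility can be estimated with a linear, piezoresistance-style model, $\Delta\mu/\mu \approx \pi\,\sigma$. The coefficient, stress level, and baseline mobility below are assumed order-of-magnitude numbers, and the linear model is only indicative at such large stresses:

```python
# First-order (small-stress) estimate of strain-enhanced mobility.
pi_coeff = 70e-11   # assumed piezoresistance coefficient, 1/Pa (order of magnitude for Si)
sigma = 1.0e9       # 1 GPa of channel stress, typical of aggressive strain engineering
mu_0 = 100.0        # assumed unstrained hole mobility in a scaled channel, cm^2/(V s)

delta_mu_over_mu = pi_coeff * sigma            # fractional mobility change, ~0.7
mu_strained = mu_0 * (1.0 + delta_mu_over_mu)  # linear model stretched to large stress
print(f"mobility boost: {delta_mu_over_mu:.0%} -> {mu_strained:.0f} cm^2/(V s)")
```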

Beyond intentional design, simulation also helps us understand the impact of unavoidable imperfections. During fabrication, defects inevitably form, particularly at the critical interface between the silicon channel and the gate insulator. These can take the form of fixed charges ($Q_f$) trapped in the oxide, or a spectrum of interface traps ($D_{it}(E)$) that can capture and release carriers. These defects are not mere curiosities; they have a direct, and often detrimental, effect on the transistor's behavior. By incorporating these defects as boundary conditions in the simulation, we can precisely quantify their impact. For instance, simulation shows that these interface charges cause a shift in the threshold voltage ($V_T$), the voltage required to turn the transistor on. They also degrade the subthreshold slope ($SS$), a measure of how effectively the transistor can be turned off, leading to unwanted leakage current. This link between process-induced defects and key device metrics allows engineers to diagnose manufacturing problems and set tolerances on process quality.
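The first of these shifts is easy to estimate with the standard parallel-plate relation $\Delta V_T = -Q_f / C_{ox}$. The oxide thickness and trapped-charge density below are assumed, illustrative values:

```python
q = 1.602176634e-19          # elementary charge, C
eps_0 = 8.8541878128e-12     # vacuum permittivity, F/m
eps_ox = 3.9 * eps_0         # SiO2 relative permittivity ~3.9

t_ox = 2e-9                  # assumed 2 nm gate oxide
N_f = 1e11 * 1e4             # assumed fixed charge: 1e11 cm^-2, converted to m^-2

C_ox = eps_ox / t_ox         # oxide capacitance per unit area, F/m^2
delta_VT = -q * N_f / C_ox   # threshold-voltage shift, V (negative for positive Q_f)
print(f"C_ox = {C_ox * 1e3:.2f} mF/m^2, Delta V_T = {delta_VT * 1e3:.1f} mV")
```

Even this modest trapped-charge density moves the threshold by several millivolts, which is why interface quality is tracked so closely in manufacturing.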

The Art of Calibration: Where Simulation Meets Experiment

A simulation, no matter how sophisticated, is only a model of reality. Its predictive power hinges on the accuracy of the physical parameters used in its equations. How do we determine the correct value for the minority carrier lifetime, or the precise band alignment at a heterojunction? We cannot simply look them up in a textbook; they depend on the unique details of the fabrication process. The answer is to calibrate the simulation against experimental measurements of real devices. This process represents a beautiful dialogue between theory and experiment.

Consider the task of building a model for a novel device like a Tunnel FET (TFET) or a high-power diode. The workflow is a masterful application of the scientific method.

First, we fabricate the device. Then, we take it to the lab and measure its electrical characteristics extensively. We measure its current-voltage ($I$-$V$) characteristics using short pulses to avoid the confounding effect of the device heating up. We measure its capacitance-voltage ($C$-$V$) characteristics at different frequencies and temperatures.

Each of these measurements provides a clue to a specific piece of the device's internal physics. For example, by measuring capacitance at a high frequency, we ensure that slow-moving charge traps at interfaces don't have time to respond, allowing us to isolate and measure the depletion capacitance, which tells us about the doping profile. By comparing this to a low-frequency measurement where the traps can respond, we can deduce the trap density $D_{it}$. Similarly, to isolate the quantum-mechanical band-to-band tunneling (BTBT) current that is the primary mechanism of a TFET, we can perform $I$-$V$ measurements at very low temperatures. This effectively "freezes out" temperature-dependent leakage mechanisms like Shockley-Read-Hall (SRH) recombination, giving us a clean signal of the tunneling process.

With this wealth of experimental data in hand, we return to the simulator. We run simulations and adjust the unknown model parameters—band offsets, carrier lifetimes, mobility models—until the simulated $I$-$V$ and $C$-$V$ curves match the measured data across all temperatures and biases. This painstaking process of calibration transforms the simulation from a qualitative tool into a quantitatively predictive powerhouse.
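A toy version of this calibration loop: generate a synthetic diode $I$-$V$ curve with a known "true" ideality factor, then recover that parameter by fitting the exponential region, just as a TCAD model parameter would be tuned against measurement. The diode equation and all numbers here are illustrative stand-ins for the real workflow:

```python
import numpy as np

V_T = 0.02585                  # thermal voltage at 300 K, V
I_s, n_true = 1e-12, 1.5       # "true" saturation current and ideality factor

# Synthetic "measurement" in the exponential region (V >> V_T)
V = np.linspace(0.3, 0.6, 31)
I_meas = I_s * (np.exp(V / (n_true * V_T)) - 1.0)

# "Calibration": fit log(I) = log(I_s) + V / (n * V_T) by linear least squares
slope, intercept = np.polyfit(V, np.log(I_meas), 1)
n_fit = 1.0 / (slope * V_T)
I_s_fit = np.exp(intercept)
print(f"n_fit = {n_fit:.4f}, I_s_fit = {I_s_fit:.2e} A")
```

Real calibration iterates the same idea over many parameters and many measured curves at once, usually with a full nonlinear optimizer rather than a single log-linear fit.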

From Devices to Circuits: Bridging the Scales

Simulating a single transistor with TCAD is computationally intensive. Simulating an entire microchip with billions of transistors is an impossibility. To bridge this vast difference in scale, we rely on an abstraction known as a ​​compact model​​. A compact model, like the widely used BSIM family, is a set of analytical equations that captures the behavior of a transistor, allowing circuit simulators like SPICE to analyze large circuits efficiently. Device simulation plays a crucial role in creating and informing these compact models.

A key challenge is that a transistor's behavior is not intrinsic; it depends on its local environment on the chip. For instance, the Well Proximity Effect (WPE) arises because the doping concentration near the edge of a well is not uniform, causing a transistor's threshold voltage to shift depending on its distance, $d_w$, to the well edge. Similarly, the mechanical stress from the Shallow Trench Isolation (STI) structures that separate transistors changes the carrier mobility depending on the distance $d_s$ to the isolation boundary. Device simulation allows us to characterize these layout-dependent effects (LDEs). The result is not a single set of model parameters, but a sophisticated model that knows how to adjust a transistor's $V_{th}$ and $\mu$ based on its specific geometric context, which is extracted from the final chip layout.

Furthermore, the very structure of these compact models is deeply rooted in physical principles. A crucial breakthrough in modern modeling was the move to charge-based formulations. Why? The fundamental law of charge conservation demands that the sum of all currents flowing into and out of a device must be zero at all times. Early current-based models, which treated capacitances as independent add-ons, could fail this test, leading to simulations where charge was mysteriously created or destroyed. A modern charge-based model, by contrast, is built around a set of terminal charge functions ($Q_G$, $Q_D$, $Q_S$, $Q_B$) that, by construction, sum to zero. The time-varying displacement currents are then calculated as the derivatives of these charges ($I_k = dQ_k/dt$). This elegant formulation mathematically guarantees that charge is conserved, a vital property for accurate transient and AC simulations. This shows how deep physical principles must be respected even when we move to higher levels of abstraction.
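That guarantee can be demonstrated in a few lines: if the terminal charges sum to zero at every bias by construction, their time derivatives (the displacement currents) must sum to zero too. The charge expressions below are made-up smooth functions, not a real BSIM charge partition:

```python
import numpy as np

def terminal_charges(v):
    """Toy bias-dependent terminal charges that sum to zero by construction."""
    q_g = 1e-15 * np.tanh(5.0 * v)   # gate charge (arbitrary smooth model)
    q_d = -0.4 * q_g                 # drain takes 40% of the image charge (assumed)
    q_s = -0.4 * q_g                 # source takes 40% (assumed)
    q_b = -(q_g + q_d + q_s)         # bulk takes the remainder: total is exactly 0
    return np.array([q_g, q_d, q_s, q_b])

t = np.linspace(0.0, 1e-9, 1001)                    # 1 ns transient
v_g = 0.5 * np.sin(2 * np.pi * 1e9 * t)             # a 1 GHz gate waveform
Q = np.array([terminal_charges(v) for v in v_g])    # shape (time, 4 terminals)

# Displacement currents I_k = dQ_k/dt via numerical differentiation
I = np.gradient(Q, t, axis=0)
# Kirchhoff's current law holds at every time step, to rounding error
assert np.max(np.abs(I.sum(axis=1))) < 1e-12 * np.max(np.abs(I))
```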

Embracing Randomness: The Statistical Frontier

If we could build two truly identical transistors, they would behave identically. But at the atomic scale, we cannot. The number and exact position of dopant atoms in a channel, the precise grain structure of the metal gate—these are random variables. As a result, every transistor is unique, a phenomenon known as variability. Device simulation is our primary tool for understanding and predicting this randomness.

A central concept from probability theory, the Central Limit Theorem (CLT), tells us that when we add up many small, independent random contributions, their sum tends to follow a Gaussian, or bell curve, distribution. Since the threshold voltage ($V_{th}$) of a transistor is affected by numerous independent sources of variation (dopant fluctuations, line-edge roughness, oxide thickness variations), it is often reasonable to assume that its distribution will be approximately Gaussian. This powerful assumption simplifies statistical circuit analysis enormously.

However, the Feynman spirit encourages us to ask, "When does this assumption fail?" Device simulation provides the answer. In extremely small transistors, the total number of dopant atoms in the channel might be only a handful—say, 10 or 20. The CLT's many-contributions assumption breaks down. The discreteness of the dopant count leads to a $V_{th}$ distribution that is noticeably skewed and non-Gaussian. In another example, the metal used for the gate is composed of microscopic crystal grains. If the gate area is small enough to be dominated by just one or two grains, and these different grain orientations have distinct work-function values, the resulting $V_{th}$ distribution across many devices can become bimodal—having two peaks—a clear deviation from the single-peaked Gaussian curve. Simulation allows us to explore these fascinating non-Gaussian regimes that are invisible to simpler models but critical for the reliability of modern technology.

To perform these statistical studies, we run an "ensemble" of simulations. For each run, the simulator generates a new, random configuration of dopants and calculates the resulting $V_{th}$. By running hundreds or thousands of these virtual experiments, we can build up a distribution and calculate its standard deviation, $\sigma_{V_T}$. But how many runs are enough? Here again, simulation connects with profound statistical theory. Using asymptotic methods, we can derive a formula that tells us the minimum ensemble size $N$ required to estimate $\sigma_{V_T}$ to a desired precision $\delta$ with a specified confidence level. This criterion depends on the kurtosis ($\beta_2$) of the underlying distribution—a measure of its tailedness—and provides a rigorous foundation for this computationally intensive but essential analysis.
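Under the standard large-sample approximation for the variance of a sample standard deviation, that criterion works out to roughly $N \ge z^2 (\beta_2 - 1) / (4\delta^2)$ for relative precision $\delta$ at confidence factor $z$. This is a generic statistical result, sketched here with assumed numbers rather than quoted from any particular simulator manual:

```python
import math

def ensemble_size(beta2, delta, z=1.96):
    """Runs needed so the sample std of V_T is within a relative error
    delta of the true sigma, at ~95% confidence (z = 1.96).

    Uses the large-N result  Var(s)/sigma^2 ~ (beta2 - 1) / (4N),
    where beta2 is the kurtosis of the V_T distribution.
    """
    return math.ceil(z**2 * (beta2 - 1.0) / (4.0 * delta**2))

# Gaussian V_T (beta2 = 3), 5% precision: a few hundred runs suffice.
print(ensemble_size(beta2=3.0, delta=0.05))   # 769
# Heavy-tailed distribution (beta2 = 9): four times as many runs are needed.
print(ensemble_size(beta2=9.0, delta=0.05))   # 3074
```

Note how the heavier the tails of the $V_{th}$ distribution, the more virtual experiments are needed for the same statistical confidence, which is exactly why the non-Gaussian regimes above are so expensive to characterize.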

The Strategy of Simulation

Finally, simulation is not a monolithic black box. It is a toolbox, and its effective use requires physical intuition and engineering judgment. A key decision is the choice of dimensionality. Must every simulation be a full, computationally expensive 3D model?

The answer depends on the physics of the device in question. For a wide, planar transistor, where the device is largely uniform along its width, the electric fields and current flow are predominantly two-dimensional (along the channel length and into the substrate). In this case, a 2D simulation is often perfectly adequate and dramatically faster. However, if we consider a narrow-channel device, where the width is comparable to the channel length or depletion depths, this assumption breaks down. Fringing electric fields from the sides become significant, and complex 3D corner effects alter the depletion charge. For such a device, or for an inherently three-dimensional architecture like a FinFET, a 2D simulation would give misleading results. A full 3D TCAD simulation becomes mandatory to capture the correct physics.

This choice underscores a final, crucial point: device simulation is not a replacement for thinking. It is a tool that extends our intuition, allowing us to see the invisible world inside these remarkable devices. It unifies the abstract with the concrete, the theoretical with the experimental, and the deterministic with the random. It is in this virtual forge that the science of semiconductors is translated into the technology that shapes our lives.