
Non-differentiable Function

Key Takeaways
  • Non-differentiable functions possess sharp "kinks" where a unique derivative fails to exist, a concept exemplified by the absolute value function.
  • The subgradient generalizes the derivative for non-differentiable functions, providing a critical tool for optimization in modern machine learning.
  • Far from being rare, continuous but nowhere-differentiable functions are abundant and model real-world phenomena in physics, engineering, and economics.
  • The arithmetic of non-differentiable functions reveals that while addition preserves kinks, multiplication can sometimes "heal" them to create smooth functions.

Introduction

In the landscape of mathematics, we often prioritize functions that are smooth and continuous, whose behavior can be neatly described by the tools of elementary calculus. However, this focus on well-behaved curves overlooks a vast and fascinating world: the realm of non-differentiable functions. These functions, characterized by sharp "corners," "kinks," or erratic oscillations, are often dismissed as mere mathematical curiosities or pathological cases. This article challenges that perception, revealing them as fundamental structures that provide a more accurate language for describing complex phenomena in science and technology.

This exploration is divided into two parts. First, in "Principles and Mechanisms," we will delve into the fundamental nature of non-differentiability, starting with simple examples like the absolute value function and progressing to the counterintuitive existence of functions that are continuous everywhere but differentiable nowhere. We will uncover the rules that govern their behavior and understand their place in the broader universe of functions. Subsequently, in "Applications and Interdisciplinary Connections," we will journey out of pure mathematics to witness how these "kinks" and "jumps" become powerful tools in fields as diverse as machine learning, physics, and economics, enabling breakthroughs in optimization, material science, and models of human behavior. Prepare to discover that the sharp edges of mathematics are not flaws, but features essential for understanding our world.

Principles and Mechanisms

In our introduction, we caught a glimpse of functions that defy the smooth, predictable world of elementary calculus. We spoke of functions that are continuous—you can draw them without lifting your pen—but possess sharp "corners" or "kinks" where the notion of a unique tangent line breaks down. These are the non-differentiable functions. But what are they, really? How do they behave? Are they rare curiosities or a fundamental feature of the mathematical landscape? Let's embark on a journey, in the spirit of a physicist exploring a new phenomenon, to understand the principles and mechanisms that govern this fascinating world.

The Gentle Slope and the Sharp Corner

The derivative, at its heart, is a simple idea: it's the slope of a curve at a single point. For a function like f(x) = x^2, the slope changes gracefully from point to point. We can zoom in anywhere on its graph, and it will look more and more like a straight line. This "local flatness" is the essence of differentiability.

The simplest function that challenges this idea is the absolute value function, f(x) = |x|. Its graph is a perfect 'V' shape, with a sharp point at x = 0. To the right of zero, the slope is a constant +1. To the left, it's a constant -1. But right at the origin, what is the slope? There is a conflict. The limit from the left tells us the slope should be -1, while the limit from the right insists it is +1. Since they don't agree, the derivative simply does not exist at that point. There is no unique tangent line.

This single sharp point is the seed of all non-differentiability. We can build more complex functions that have kinks at various locations. Consider a function built from multiple absolute value terms, such as f(x) = 2|x - 3| - 5|x + 4|. This function is continuous everywhere, but it inherits the "kinkiness" from its components. The potential trouble spots are at x = 3 and x = -4, the points where the expressions inside the absolute values become zero. By analyzing the function piece by piece, we find that at x = -4, the slope of the graph abruptly changes from 3 to -7. At x = 3, it jumps from -7 to -3. These abrupt changes, these irreconcilable differences between the slope just to the left and the slope just to the right, are precisely why the function is not differentiable at those two points.
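The piecewise analysis above can be confirmed numerically. This minimal sketch probes each candidate kink of f(x) = 2|x - 3| - 5|x + 4| with one-sided difference quotients and recovers the clashing slopes:

```python
# Probe the one-sided slopes of f(x) = 2|x - 3| - 5|x + 4| at its
# candidate kinks x = -4 and x = 3 using small one-sided steps.

def f(x):
    return 2 * abs(x - 3) - 5 * abs(x + 4)

def one_sided_slopes(func, x0, h=1e-6):
    """Return (left slope, right slope) difference quotients at x0."""
    left = (func(x0) - func(x0 - h)) / h
    right = (func(x0 + h) - func(x0)) / h
    return left, right

for x0 in (-4.0, 3.0):
    left, right = one_sided_slopes(f, x0)
    print(x0, round(left), round(right))
# At x = -4 the slope jumps from 3 to -7; at x = 3, from -7 to -3.
```

Wherever the two quotients disagree, the derivative fails to exist, exactly as the piece-by-piece algebra predicts.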

The Arithmetic of Kinks

What happens when we combine smooth, well-behaved functions with these kinky ones? Does the smoothness heal the kink, or does the kink spoil the smoothness? The answer depends on how we combine them.

Let's take a non-differentiable function, like g(x) = |x|, and add a perfectly smooth, infinitely differentiable function to it, say f(x) = cos(x). The new function is F_A(x) = cos(x) + |x|. What happens at the troublesome point x = 0? The smooth function cos(x) has a well-defined slope of 0 at x = 0. But this does nothing to resolve the conflict within |x|. The kink persists. Adding a smooth ramp to a sharp step doesn't remove the corner of the step. The rule is simple and robust: the sum of a differentiable function and a non-differentiable function is always non-differentiable. If the sum were differentiable, we could subtract the differentiable part from it, and the leftover non-differentiable part would then be differentiable, which is a contradiction!

Multiplication, however, is a different story. It can sometimes perform a magical act of healing. Consider the function h(x) = x|x|. Here, we are multiplying the non-differentiable function |x| by the simple differentiable function x. At x = 0, something wonderful happens. For x > 0, h(x) = x^2. For x < 0, h(x) = -x^2. The graph now looks like two parabolas seamlessly joined at the origin. The kink has vanished! The multiplication by x "squashes" the function near the origin just enough to smooth out the corner. The derivative at x = 0 is now a perfectly well-defined 0.

This tells us something deep. Non-differentiability isn't just an on/off switch. It's about how a function behaves near a point. The function |x| approaches its value at the origin linearly, causing the slopes to clash. The function x|x| approaches the origin quadratically, which is flat enough to guarantee a single, unique horizontal tangent. In some cases, multiplication can even combine functions of limited smoothness into an infinitely smooth one. For instance, multiplying g(x) = |x| (not differentiable at zero) by h(x) = x|x| (differentiable, but not twice) gives F_D(x) = x(|x|)^2. Since (|x|)^2 is just x^2, this simplifies to the polynomial x^3, which is differentiable everywhere. The algebra of kinks is full of surprises!
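The "healing" described above is easy to witness numerically. A short sketch: the symmetric difference quotient of h(x) = x|x| at the origin shrinks to zero with the step size, and the product |x| * x|x| agrees with the polynomial x^3:

```python
# The kink of |x| disappears in h(x) = x*|x|: the symmetric difference
# quotient at 0 tends to 0, so the tangent there is horizontal.

def h(x):
    return x * abs(x)

for step in (1e-1, 1e-3, 1e-5):
    q = (h(step) - h(-step)) / (2 * step)  # equals `step` exactly here
    print(step, q)  # shrinks to 0 with the step size

# F_D(x) = |x| * (x|x|) simplifies to x**3 for every x:
x = 0.7
print(abs(x) * h(x), x**3)  # identical values
```

The quotient is exactly equal to the step size, so the limit, and hence the derivative at 0, is 0.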

Creases, Peaks, and Higher Dimensions

Extending these ideas to functions of two or more variables, say f(x, y), takes us from curves to surfaces. For a surface to be differentiable at a point, it must be "locally flat"—it must have a well-defined tangent plane. The analog of a "kink" can now be a "crease" (like on a folded piece of paper) or a sharp "peak" (like the tip of a cone).

We can probe the surface's smoothness in specific directions. This is called the directional derivative. It tells us the slope of a particular slice of the surface. One might naively think that if all the directional derivatives exist, the function must be differentiable. But this is not true!

Imagine a hypothetical scenario: at a point P_0, the directional derivative in a direction u is 5, and in the opposite direction, -u, it is -5. Does this imply differentiability at P_0? It turns out this relationship, D_{-u}f(P_0) = -D_u f(P_0), is a general property that holds whenever these directional derivatives exist, and it tells us nothing about overall differentiability. To see why, consider the function f(x, y) = |x| + 5y. Its graph is like a sheet of paper folded along the y-axis, forming a "crease." Let's look at the origin, (0, 0). If we slice the surface along the y-axis (the direction u = (0, 1)), the slice is just the line 5y, which has a slope of 5. So, the directional derivative is 5. In the opposite direction, it's -5. The condition is met. However, if we try to slice it along the x-axis, we run into the familiar kink of |x|. The surface is not "locally flat" at the origin; it has a sharp crease. The different directional slices don't assemble into a single, coherent tangent plane. Thus, a function can have perfectly well-behaved directional derivatives in some directions while failing to be differentiable as a whole. Differentiability in higher dimensions is a much stricter requirement than just the smoothness of its one-dimensional slices.
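The crease in f(x, y) = |x| + 5y can be exposed numerically. This sketch approximates directional derivatives at the origin as one-sided limits; along the y-axis the antisymmetry holds, but along the x-axis both directions report slope +1, which no tangent plane could produce:

```python
# Directional derivatives of f(x, y) = |x| + 5y at the origin,
# approximated by (f(t*u) - f(0, 0)) / t for a small t > 0.

def f(x, y):
    return abs(x) + 5 * y

def dir_deriv(u, t=1e-8):
    ux, uy = u
    return (f(t * ux, t * uy) - f(0.0, 0.0)) / t

print(dir_deriv((0, 1)))    # 5 along +y
print(dir_deriv((0, -1)))   # -5 along -y: antisymmetry holds here
print(dir_deriv((1, 0)))    # +1 along +x ...
print(dir_deriv((-1, 0)))   # ... and +1 along -x as well, not -1:
# the slices along the x-axis break the antisymmetry a tangent
# plane would force, revealing the crease.
```

A differentiable function would satisfy D_u f = grad(f) . u, making opposite directions give opposite signs in every direction, not just some.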

A Law of Order: The Differentiability of Monotone Functions

So far, our examples have had non-differentiable points that are few and far between. But can a function be kinky at infinitely many points? Can it be kinky everywhere? Before we venture into that wilderness, let's consider a class of remarkably well-behaved functions: monotone functions. These are functions that are always non-decreasing or always non-increasing.

This category is not just an abstract curiosity; it's fundamental to the real world. For instance, in probability and reliability engineering, the Cumulative Distribution Function (CDF) of a component's lifetime, F_X(x), gives the probability of failure by time x. By its very nature, this probability can only stay the same or increase as time goes on, so any CDF is a non-decreasing function.

For these functions, the great mathematician Henri Lebesgue discovered a profound law of order. Lebesgue's differentiation theorem states that any monotone function on an interval is differentiable almost everywhere. "Almost everywhere" has a precise meaning: the set of points where the derivative fails to exist has a total "length" (or Lebesgue measure) of zero. This is a stunning result! It means that even if a monotone function has jumps or an infinite number of kinks, those points are so sparsely scattered that they are, in a sense, negligible.

How wild can this set of non-differentiable points be? The set of rational numbers Q in [0, 1] is infinite and dense (between any two numbers there is a rational one), yet it has a Lebesgue measure of zero. Could we construct a monotone function that is non-differentiable at every rational number? The answer is a resounding yes. One can build such a function using a sum of step functions, one for each rational number. Even more astonishingly, it is possible to construct a function that is strictly increasing and continuous everywhere on [0, 1], yet its derivative fails to exist at precisely the rational numbers and nowhere else. This is a delicate and beautiful construction, showing that a function can be "jerky" on a dense set of points while remaining perfectly continuous and always rising.

Into the Abyss: Continuous but Nowhere Differentiable Functions

Lebesgue's theorem gives us a safety net for monotone functions. But what happens if we remove that constraint? What if a function is allowed to oscillate up and down wildly? This question led 19th-century mathematicians to a shocking discovery: the existence of functions that are continuous everywhere but differentiable nowhere.

These are not just functions with many kinks. They are pathological monsters that wiggle so violently, on every possible scale, that the very idea of a tangent line becomes meaningless at every single point. The classic analogy is a coastline. From a distance, it looks smooth. Zoom in, and you see bays and peninsulas. Zoom in on a bay, and you see smaller coves and headlands. This self-similar roughness continues indefinitely. A nowhere differentiable function has this "infinite-zoom" jaggedness at every point on its graph.

How could such a thing even arise? A crucial insight comes from observing sequences of functions. We can start with a sequence of perfectly smooth, differentiable functions, like f_n(x) = sqrt(x^2 + 1/n^2). As n gets larger, these functions get closer and closer to f(x) = |x|. The smooth curves converge to a function with a kink. This opens the door: if a simple limiting process can create one kink, perhaps a more sophisticated one—like summing up infinitely many waves with increasing frequencies and decreasing amplitudes—could create a kink at every point. This is precisely how Karl Weierstrass constructed the first famous example in 1872, to the astonishment of the mathematical community.
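The convergence of the smooth approximants to the kinked limit is easy to see in a few lines. The gap between f_n(x) = sqrt(x^2 + 1/n^2) and |x| is largest at the kink itself, where it equals exactly 1/n:

```python
# Smooth functions f_n(x) = sqrt(x**2 + 1/n**2) converging to |x|.
# Each f_n has a well-defined derivative 0 at x = 0, yet the limit
# function |x| has no derivative there at all.
import math

def f_n(x, n):
    return math.sqrt(x * x + 1.0 / n**2)

for n in (1, 10, 1000):
    gap_at_kink = f_n(0.0, n) - abs(0.0)   # exactly 1/n
    gap_nearby = f_n(0.5, n) - abs(0.5)    # even smaller
    print(n, gap_at_kink, gap_nearby)
```

Differentiability is not preserved under this kind of (uniform) limit, which is the door Weierstrass walked through.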

These functions have almost supernatural properties. Suppose you have a nowhere differentiable function, W(x). What if you try to "heal" it by adding a perfectly smooth function, say g(x) = sin(x)? Does this smooth out even one single point? The answer is a definitive no. The sum F(x) = W(x) + g(x) remains nowhere differentiable. The property of being nowhere differentiable is like a genetic disease with 100% penetrance; it's so pathologically ingrained at every point that no amount of smooth perturbation can cure it.

A Census of the Functional Universe

We have met the well-behaved differentiable functions of calculus and the monstrous nowhere-differentiable functions of advanced analysis. Which type is more "common"? Are the monsters rare oddities, or are the smooth functions the special ones? The answer, provided by a powerful result called the Baire Category Theorem, is one of the most profound surprises in all of mathematics.

Imagine the space of all continuous functions on an interval, say C([0, 1]). We can define a notion of "distance" between two functions, so we can talk about functions being "close" to one another. In this vast universe of functions, it turns out that both the set of functions that are differentiable at least somewhere and the set of functions that are differentiable nowhere are dense.

"Dense" here means what it sounds like. Think of the rational and irrational numbers on the real number line. No matter how small an interval you look at, you will find both types of numbers. The same is true in the universe of functions. Take any continuous function you can imagine—a straight line, a parabola, a crazy squiggle. You can find another function arbitrarily "close" to it that is a well-behaved polynomial (differentiable everywhere), AND you can find another function arbitrarily "close" to it that is a nowhere-differentiable monster.

The conclusion is staggering. Far from being rare, the nowhere differentiable functions are everywhere. In a very precise sense, a "typical" continuous function is nowhere differentiable. The smooth, predictable functions that form the bedrock of calculus and physics are the true exceptions. They are like perfect crystals in a world that is, by its nature, rugged and chaotic.

A Final Act of Mathematical Magic

Our journey has shown that non-differentiability is a deep, complex, and surprisingly common phenomenon. The rules that govern it often defy our initial intuition. Let's end with one last puzzle that turns our expectations upside down.

We've seen that combining functions can destroy differentiability. Can the process of composition also create it? Is it possible to find a function f(x) that is itself not differentiable, but when you compose it with itself—forming g(x) = f(f(x))—the result is a perfectly smooth, continuously differentiable function?

It seems impossible. The chain rule, (f ∘ f)'(x) = f'(f(x)) · f'(x), suggests that the derivatives of f are required. But what if f'(x) doesn't exist? The trick is to be clever. We can construct a function f that has kinks, but is designed such that its range (the output values) falls entirely within a region where the function itself is flat. For example, we can build a continuous function with an infinite number of non-differentiable "corners" whose output values are always small positive numbers. Then, we can define the function to be zero for all those small positive inputs. The result? The first application of f produces an output y = f(x). The second application operates on this y, and since y is in the "flat" zero region, f(y) is always zero. The composed function f(f(x)) is simply the constant function 0, which is as smooth as can be! And this trick can be adapted to produce non-constant smooth functions as well.

This is more than just a clever trick. It is a testament to the creative and constructive spirit of mathematics. It reminds us that even when dealing with "pathological" objects, there is an underlying logic and beauty, and the rules of the game allow for outcomes that are as elegant as they are unexpected. The world of non-differentiable functions is not just a collection of problems; it is a rich territory for exploration, wonder, and discovery.

Applications and Interdisciplinary Connections

For a long time in our scientific education, we are taught to cherish the smooth and well-behaved. We learn to love functions we can differentiate again and again, functions that trace elegant, flowing curves without any unseemly breaks or sharp corners. We are told, implicitly, that the universe at its most fundamental is smooth. But what if this is only half the story? What if I told you that some of the most profound and revolutionary ideas in modern science and engineering are built not on smoothness, but on the very "misbehaved" functions your calculus teacher warned you about? The kink, the corner, and the jump are not pathologies to be avoided; they are powerful tools that describe the world as it often is: decisive, constrained, and full of sharp transitions.

Let's embark on a journey to see where these sharp edges appear and why they are so incredibly useful. We will see that from the logic of a computer algorithm to the physics of a fracturing rock, non-differentiability is a secret language that nature, and we, use to solve some of the hardest problems.

The Calculus of Kinks: Optimization and Machine Learning

Imagine you are programming a robot to navigate a room, but it must stay within a certain boundary. A classical approach might be to create a smooth "force field" that gently pushes the robot away from the wall. The closer it gets, the harder the push. This is the idea behind a smooth penalty, like adding a term c[g(x)]^2 to your cost function, where g(x) = 0 represents the wall. This works, but it has a strange side effect: to perfectly enforce the constraint, the penalty strength c has to become infinitely large. The robot never quite learns to stay off the wall; it just learns that getting very close is very "expensive."

Now, what if we used a sharp penalty instead? Consider the absolute value function, |g(x)|. This function has a sharp kink where g(x) = 0. Using this as a penalty, c|g(x)|, creates a fundamentally different landscape. The point of non-differentiability at the wall acts like a "hard" barrier in the optimization. It turns out that for a large enough (but finite!) value of c, the minimum of this new, non-differentiable problem is exactly the solution to the original constrained problem. The kink isn't a bug; it's a feature that allows us to enforce constraints with perfect precision.
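A minimal sketch makes the contrast concrete. The toy problem below is an assumption for illustration: minimize (x - 2)^2 subject to x = 1, so g(x) = x - 1. With a brute-force grid search, the sharp penalty lands exactly on the constraint at a finite c, while the smooth penalty stays biased:

```python
# Exact-penalty demo on an assumed toy problem: minimize (x - 2)**2
# subject to g(x) = x - 1 = 0. Sharp penalty c*|g| vs smooth c*g**2.

def argmin_on_grid(F, lo=-1.0, hi=3.0, n=40001):
    """Brute-force minimizer of F over an evenly spaced grid."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=F)

c = 4.0  # any finite c >= 2 = |slope of (x-2)**2 at x=1| suffices here
sharp = argmin_on_grid(lambda x: (x - 2)**2 + c * abs(x - 1))
smooth = argmin_on_grid(lambda x: (x - 2)**2 + c * (x - 1)**2)
print(sharp)   # exactly at the constraint: x = 1.0
print(smooth)  # biased away from it: x = (2 + c)/(1 + c) = 1.2
```

Pushing the smooth penalty's minimizer to 1 would require c to grow without bound; the kink achieves it at c = 4.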

This success, however, comes at a price. Our standard optimization tool, gradient descent, relies on following the direction of the steepest slope. But what is the slope at a sharp corner? A naive algorithm, trying to compute a gradient at the kink, can get hopelessly confused. It might get stuck, thinking the slope is zero when it isn't, or it might oscillate wildly as it steps back and forth across the corner. We need a new tool.

That tool is the subgradient. Think of a smooth, convex (bowl-shaped) function. At any point, you can draw a unique tangent line that sits entirely below the function. The slope of that line is the derivative. Now, imagine a function with a kink, like the absolute value function f(x) = |x| at x = 0. At the bottom of this "V" shape, you can't draw a unique tangent line. But you can draw a whole fan of lines that pass through the point (0, 0) and stay below the graph. Their slopes could be anything from -1 to 1. This set of all possible "supporting" slopes, in this case the interval [-1, 1], is the subdifferential, and any single slope within it is a subgradient.

The rule for finding a minimum is then beautifully generalized: a point minimizes a convex function if and only if the number zero is contained in its subdifferential there. For f(x) = |x + 2|, the subdifferential at the minimum point x = -2 is the set [-1, 1], which happily contains zero. Armed with this concept, we can design "subgradient descent" algorithms that navigate these kinked landscapes.
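A minimal subgradient descent sketch for f(x) = |x + 2| looks like this. At the kink any value in [-1, 1] is a valid subgradient; we pick 0, which conveniently doubles as the optimality certificate:

```python
# Subgradient descent on f(x) = |x + 2| with diminishing step sizes.

def subgradient(x):
    """One valid subgradient of |x + 2| at x."""
    if x > -2:
        return 1.0
    if x < -2:
        return -1.0
    return 0.0  # at the kink any value in [-1, 1] is valid; 0 certifies a minimum

x = 2.0  # starting point
for k in range(1, 501):
    x -= (1.0 / k) * subgradient(x)  # steps 1/k shrink, sum diverges

print(round(x, 2))  # converges to the minimizer x = -2
```

The 1/k step schedule is the classical choice: the steps shrink so the iterate settles, but their sum diverges so it can travel arbitrarily far first.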

And where is this idea most triumphantly applied? In the world of data, AI, and machine learning.

  • Sparsity and the L1-Norm: How do you find the most important factors in a complex dataset with thousands of variables? You might want a model that is "sparse"—one that sets the coefficients of most irrelevant variables to exactly zero. The key to this is the L1-norm, ||x||_1 = Σ_i |x_i|, which is just a sum of absolute values. Minimizing a loss function plus a penalty on the L1-norm (a technique called LASSO) magically encourages sparse solutions. The non-differentiable corners of the L1-norm function are precisely what pull coefficients all the way to zero. We can solve these problems using subgradient descent, where at each step, we pick a valid subgradient to guide our search for the simplest, most predictive model.

  • The Brain of AI: ReLU: The engine of the deep learning revolution is the neural network, and the unsung hero within it is a function called the Rectified Linear Unit, or ReLU: f(x) = max(0, x). This incredibly simple function—flat for negative inputs, and a straight line for positive inputs—has a single kink at x = 0. Modern neural networks are built by composing millions of these non-differentiable units. How do we train them? The automatic differentiation (AD) engines that power frameworks like PyTorch and TensorFlow have been taught the rules of subgradients. When they encounter a ReLU unit at exactly zero during training, they don't panic; they simply use a pre-defined subgradient value (like 0 or 1/2) to continue the computation. The entire edifice of modern AI rests on a consistent and practical application of our "calculus of kinks."

  • Designing Learning: We can even build non-differentiability directly into our machine learning objectives to reflect specific goals. Suppose we are building a medical diagnostic tool. A "false negative" (missing a disease) might be far more costly than a "false positive." We can design a custom, non-differentiable loss function with different slopes to penalize these two types of errors asymmetrically. The kinks in the loss function represent our explicit, value-laden choices about what kinds of mistakes the machine is allowed to make.
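The way the L1-norm's corners pull coefficients to exactly zero has a famous closed form: minimizing 0.5*(x - z)^2 + λ|x| over x gives the "soft-thresholding" operator, the workhorse inside LASSO solvers. A minimal sketch:

```python
# Soft-thresholding: the exact minimizer of 0.5*(x - z)**2 + lam*|x|.
# The kink of |x| at 0 absorbs every z with |z| <= lam, which is why
# L1-penalized models produce coefficients that are exactly zero.

def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0  # the kink wins: exact sparsity

print(soft_threshold(3.0, 1.0))   # 2.0: large signal, shrunk by lam
print(soft_threshold(-0.4, 1.0))  # 0.0: small coefficient zeroed out
```

A smooth L2 penalty, by contrast, only shrinks coefficients toward zero without ever reaching it.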
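The ReLU convention described above fits in a few lines. Any value in [0, 1] is a valid subgradient at the kink; the sketch below adopts 0 there, one common choice:

```python
# ReLU and a subgradient convention at its kink. The value returned
# at exactly x = 0 is a design choice: anything in [0, 1] is valid.

def relu(x):
    return max(0.0, x)

def relu_subgrad(x):
    if x > 0:
        return 1.0
    if x < 0:
        return 0.0
    return 0.0  # chosen convention at the kink; 0.5 would also be valid

print(relu(-2.0), relu(3.0))  # 0.0 3.0
print(relu_subgrad(0.0))      # 0.0 under this convention
```

Because a training run almost never lands on exactly 0.0 in floating point, the specific choice rarely matters in practice, but having a consistent rule keeps the computation well defined.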
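An asymmetric loss of the kind the diagnostic example calls for can be as simple as two mismatched slopes meeting at a kink. The 5x ratio below is a hypothetical choice for illustration:

```python
# A hypothetical asymmetric loss on the residual r = y_true - y_pred:
# under-predicting risk (r > 0, a potential false negative) is charged
# 5x more steeply than over-predicting it. The kink at r = 0 encodes
# that value judgment directly in the objective.

def asymmetric_loss(r, w_fn=5.0, w_fp=1.0):
    return w_fn * r if r > 0 else -w_fp * r

print(asymmetric_loss(0.2))   # 1.0: a costly miss
print(asymmetric_loss(-0.2))  # 0.2: a cheaper false alarm
```

Minimizing such a loss pushes the model's predictions toward the cautious side, exactly the behavior the asymmetric costs demand.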

Sharp Corners in the Physical World

The utility of non-differentiability is not confined to the abstract world of algorithms and data. It is written into the very laws of the physical world.

Consider the science of materials. When does a solid, like a piece of rock or a volume of soil, fail under stress? A simple model might suggest that it fails when some smooth combination of pressures and shears exceeds a threshold. But for many materials, the reality is more complex and more interesting. The Mohr-Coulomb yield criterion, a cornerstone of geomechanics and civil engineering, describes the failure surface as a hexagon in the plane of deviatoric stresses. This surface has six sharp corners. These are not mathematical artifacts; they are physically meaningful. They represent the transition points between different modes of failure, for instance, from a state of triaxial compression to one of triaxial extension. The plastic flow of the material at one of these corners is not uniquely defined—the material has multiple "choices" for how to deform. The non-differentiability of the yield surface is a direct mathematical consequence of the frictional, piecewise nature of material failure.

Let's look at another kind of "break": a discontinuity or a jump. The Heaviside step function, H(x), which is zero for negative x and one for positive x, is the perfect model for an event that switches on at time zero. But what is its derivative? Classically, it's undefined. The function isn't even continuous. Yet physicists and engineers need to describe the derivative of a step—an impulse, a point force, a shock. The theory of generalized functions, or distributions, provides a breathtakingly elegant solution. It redefines the derivative in a "weak" sense, by telling us how it acts on other, infinitely smooth "test functions." Using this framework, the derivative of the Heaviside function is found to be the Dirac delta function, δ(x), an infinitely high, infinitesimally narrow spike at the origin whose integral is one. This framework allows us to apply the powerful machinery of calculus, like the product rule and integration by parts, to functions with jumps and spikes, providing the mathematical language for everything from signal processing to quantum field theory.
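The weak-derivative definition can be checked numerically. Integration by parts says the action of H' on a test function φ (smooth and vanishing at infinity) is ⟨H', φ⟩ = -∫ H(x) φ'(x) dx = φ(0), which is exactly what the Dirac delta does. A crude Riemann-sum sketch with a Gaussian test function:

```python
# Numerically verify -∫ H(x) * phi'(x) dx = phi(0) for a Gaussian
# test function phi(x) = exp(-x**2), using a midpoint Riemann sum.
import math

def phi_prime(x):
    return -2 * x * math.exp(-x * x)  # derivative of exp(-x**2)

def H(x):
    return 1.0 if x >= 0 else 0.0

a, b, n = -10.0, 10.0, 100000
dx = (b - a) / n
integral = sum(H(a + (i + 0.5) * dx) * phi_prime(a + (i + 0.5) * dx)
               for i in range(n)) * dx
print(-integral)  # approximately phi(0) = 1, the delta's action
```

No pointwise derivative of H is ever computed; the jump's "derivative" is defined entirely by how it pairs with smooth functions.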

Kinks in Human Behavior: A Glimpse into Economics

Finally, we turn to a domain where smoothness is perhaps the most unnatural assumption of all: the study of human behavior. Classical economic models often assume that people have smooth utility functions, meaning our satisfaction changes gracefully with changes in wealth or consumption. But behavioral economics tells a different story. We are creatures of reference points. The pain of losing $100 feels much stronger than the pleasure of gaining $100. This "loss aversion" creates a kink in our utility function at our current level of wealth.

Modeling this more realistic, kinked utility presents a challenge to economists. Many of the standard numerical methods for solving dynamic economic models, such as "shooting algorithms" that rely on Newton's method, break down precisely because they require smooth derivatives. When an agent's consumption path in a simulation crosses the reference point, the algorithm can fail spectacularly. This forces economists to adopt more robust numerical tools—like the simple bisection method, which only requires continuity—or to develop sophisticated techniques like "smoothing the kink" where the non-differentiable point is locally approximated by a smooth curve. The presence of non-differentiability in our models of human choice is a powerful reminder that our mathematics must be rich enough to capture the psychological realities of decision-making.

A World of Beautiful, Sharp Edges

Our journey is complete. We began by questioning the supremacy of smoothness and found a universe of applications for its opposite. We saw how the sharp corner of the absolute value function allows for exact optimization. We learned to navigate these corners with the subgradient, a tool that powers modern machine learning, from creating sparse models to training deep neural networks. We saw these same sharp corners appear in the physical laws governing how materials break, and we saw how jumps and impulses can be tamed with the language of distributions. Finally, we saw a reflection of these kinks in our own economic behavior.

The world is not always smooth. It has phase transitions, decision boundaries, critical thresholds, and instantaneous events. By embracing the mathematics of non-differentiability, we do not abandon calculus; we enrich it, creating a more powerful and more truthful language to describe the beautiful, sharp-edged reality we inhabit.