Popular Science

Algebraic Structures: A Guide to the Universal Language of Science and Computation

SciencePedia
Key Takeaways
  • Algebraic structures consist of a set of elements and a "rulebook" of axioms that define how to combine them.
  • Structures like semigroups, monoids, and groups are classified in a hierarchy based on which axioms (associativity, identity, inverse) they satisfy.
  • The concept of isomorphism reveals when two different-looking systems are structurally identical, providing a powerful unifying tool.
  • Abstract algebra provides the fundamental language for describing diverse phenomena, from computer logic and the geometry of space to the symmetries of modern physics.

Introduction

While the term "algebraic structures" might evoke images of abstract, esoteric mathematics, it describes a concept as fundamental as a rulebook for a game. These structures are the backbone of modern mathematics and science, providing a universal language to describe patterns. However, their study is often perceived as a mere exercise in classification, a dry collection of definitions and axioms. This view misses the profound beauty and utility of abstract algebra: its power to reveal the hidden architecture connecting seemingly unrelated parts of our universe, from computer programs to the curvature of spacetime.

This article pulls back the curtain on this powerful subject. The first chapter, "Principles and Mechanisms," will demystify the core concepts by starting with a simple game and building up the foundational axioms—closure, associativity, identity, and inverse. We will ascend the "ladder of structure" from basic magmas to the elegant symmetry of groups, exploring what happens when rules are broken or modified. Following this, the chapter "Applications and Interdisciplinary Connections" will showcase how these abstract rules manifest in the real world, serving as the essential framework for computer science, the geometry of space, and the fundamental laws of physics.

Principles and Mechanisms

Imagine you find a new board game. It has a set of pieces, but no rulebook. Is it a game? Not yet. It's just a collection of objects. An algebraic structure is the same sort of idea: it's a set of "pieces" (which could be numbers, but could also be functions, vectors, or even something as simple as words) combined with a "rulebook" (a binary operation) that tells you how to combine any two of them. The true magic of mathematics lies not in studying the pieces themselves, but in understanding the consequences of the rules. By focusing on the rules—the abstract structure—we can uncover profound connections between seemingly unrelated parts of the universe.

The Game of Axioms

Let's invent a simple game. Our pieces will be all the possible finite strings of '0's and '1's, including a special "empty string," which we'll call $\epsilon$. Our rule for combining two strings, say $s_1$ and $s_2$, is simply concatenation: we just stick them together. If $s_1 = \text{"10"}$ and $s_2 = \text{"011"}$, then $s_1 \cdot s_2 = \text{"10011"}$. Simple enough.

Now, let's play the game and ask some fundamental questions—the kinds of questions a mathematician asks. These questions are called axioms.

  1. Closure: If we combine any two pieces from our set, do we always get another piece that's also in the set? In our string game, if we concatenate two finite binary strings, we get another finite binary string. So, yes. The set is closed under the operation.

  2. Associativity: Does the grouping matter when we combine three or more pieces? Is $(s_1 \cdot s_2) \cdot s_3$ the same as $s_1 \cdot (s_2 \cdot s_3)$? For concatenation, it is. Appending "c" to "ab" gives "abc", which is the same as appending "bc" to "a". So, our operation is associative.

  3. Identity Element: Is there a special piece in our set that, when combined with any other piece, does nothing? In our game, the empty string $\epsilon$ is this special piece: $\text{"101"} \cdot \epsilon = \text{"101"}$ and $\epsilon \cdot \text{"101"} = \text{"101"}$. This "do-nothing" element is called the identity.

  4. Inverse Element: For any given piece, can we find a partner piece such that combining them gives us the identity element? For the string "10", is there another string $s$ such that $\text{"10"} \cdot s = \epsilon$? This is impossible. Concatenation only makes strings longer; it can never get you back to the empty string (unless you start with it). So, the inverse axiom fails.

By asking these four simple questions, we've just conducted a deep analysis of our structure. We've discovered it obeys the first three rules but not the fourth. This tells us its specific "type" in the grand catalogue of algebraic structures.
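The four questions above are mechanical enough to hand to a computer. Here is a minimal sketch (the names `strings` and `concat` are my own, chosen for illustration) that probes closure, associativity, identity, and inverses for string concatenation on a small sample of binary strings:

```python
from itertools import product

# Sample of "pieces": all binary strings of length <= 3, plus the empty string.
strings = ["".join(bits) for n in range(4) for bits in product("01", repeat=n)]

def concat(a, b):
    return a + b  # the "rulebook": concatenation

# Closure: concatenating two finite binary strings gives a finite binary string.
assert all(set(concat(a, b)) <= {"0", "1"} for a in strings for b in strings)

# Associativity: (a.b).c == a.(b.c) for every triple in the sample.
assert all(concat(concat(a, b), c) == concat(a, concat(b, c))
           for a in strings for b in strings for c in strings)

# Identity: the empty string does nothing on either side.
assert all(concat("", a) == a and concat(a, "") == a for a in strings)

# Inverses: no non-empty string s has a partner t with s + t == "".
assert not any(concat(s, t) == "" for s in strings for t in strings if s != "")
print("closure, associativity, identity hold; inverses fail")
```

A finite sample can only spot-check the axioms, of course; the general argument is the one given in the text.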

A Ladder of Structure

Mathematicians have created a hierarchy of structures based on which of these axioms they satisfy. Think of it as a ladder of increasing "niceness" or "completeness."

At the very bottom, any set with a closed binary operation is a magma. It's the wild west; no other rules are guaranteed.

If a magma's operation is associative, it gets promoted to a semigroup. Our string concatenation game lives here. Associativity is a powerful property, one we often take for granted. But we shouldn't! Consider the vector cross product from physics, which you use to find torques or magnetic forces. Take the set of all vectors in 3D space, $\mathbb{R}^3$, with the cross product $\times$ as the operation. Is it associative? Let's check with the standard basis vectors $\hat{i}, \hat{j}, \hat{k}$:

$$(\hat{i} \times \hat{i}) \times \hat{j} = \vec{0} \times \hat{j} = \vec{0}$$

But:

$$\hat{i} \times (\hat{i} \times \hat{j}) = \hat{i} \times \hat{k} = -\hat{j}$$

They are not the same! The cross product is famously non-associative. This isn't a flaw; it's a feature. It tells us that this structure is fundamentally different from simple addition or multiplication. It belongs to a different family of structures, the Lie algebras, which we will glimpse later.
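We can confirm this failure numerically. A hand-rolled cross product (no libraries needed) shows the two groupings disagree exactly as computed above:

```python
def cross(a, b):
    # Standard 3D cross product, component by component.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

left  = cross(cross(i, i), j)   # (i x i) x j = 0 x j = 0
right = cross(i, cross(i, j))   # i x (i x j) = i x k = -j

print(left)   # (0, 0, 0)
print(right)  # (0, -1, 0)
assert left != right  # the cross product is not associative
```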

Back on our ladder, a semigroup that has an identity element is called a monoid. Our string game is a monoid, as are the non-negative integers under addition (identity is 0) or the integers under multiplication (identity is 1).

The top of this ladder is the most famous and arguably most important structure: the group. A group is a monoid where every single element has an inverse. The integers with addition, $(\mathbb{Z}, +)$, form a group. For any integer $a$, its inverse is $-a$, because $a + (-a) = 0$, the identity. Groups are beautiful because they guarantee you can always "undo" any operation. This completeness makes them the language of symmetry, from the patterns in a crystal to the fundamental laws of particle physics.

Even the simplest possible set can form a group. Consider a set with just one element, $S = \{a\}$. Let's define the only possible operation: $a \cdot a = a$. Is this a group?

  • Closure: $a \cdot a = a$, and $a$ is in $S$. Yes.
  • Associativity: $(a \cdot a) \cdot a = a \cdot a = a$, and $a \cdot (a \cdot a) = a \cdot a = a$. Yes.
  • Identity: Does $a$ act as an identity? $a \cdot a = a$. Yes.
  • Inverse: Is there an inverse for $a$? We need something that combines with $a$ to give the identity, which is $a$. Well, $a \cdot a = a$, so $a$ is its own inverse! All axioms hold. This is the trivial group, the simplest group in the universe.

Life on the Fringes: Quasigroups and Cancellation

The group axioms are a package deal, but what happens if a structure satisfies a different collection of properties? In a group, the existence of inverses guarantees something very useful: the cancellation laws. If $a \cdot x = a \cdot y$, you can multiply by $a^{-1}$ on the left to prove that $x = y$.

This property has a wonderful visual meaning. If we write out the multiplication table (a Cayley table) for a finite structure, the cancellation law means that no element can appear more than once in any given row or column. Each row and column must be a permutation of the set's elements. Such a table is known as a Latin square.

For example, write out the Cayley tables for a few four-element structures on the set $\{p, q, r, s\}$. In some, every row is a perfect shuffle of the four elements; those satisfy the cancellation law. In others, a row repeats an element, and for them cancellation fails.

A structure whose Cayley table is a Latin square is called a quasigroup. It guarantees that equations like $a \cdot x = b$ always have a unique solution for $x$. This is a useful property, but it doesn't make it a group. A quasigroup that also has an identity element is called a loop.

Consider this structure:

$$\begin{array}{c|cccc} * & w & x & y & z \\ \hline w & x & w & z & y \\ x & y & z & w & x \\ y & z & y & x & w \\ z & w & x & y & z \end{array}$$

You can check that its table is a Latin square, so it is a quasigroup. Now let's look for an identity. The row for $z$ is $(w, x, y, z)$, meaning $z * a = a$ for all $a$. So $z$ is a left identity. But look at the columns. Is there any element $e$ with $a * e = a$ for all $a$? That would require some column to read $(w, x, y, z)^T$, and no column does. So there is no right identity. Since there's no two-sided identity, this structure is not a loop, and therefore certainly not a group. This illustrates the incredible subtlety involved. Properties like identity and cancellation can exist in partial or asymmetric forms.
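The table above is small enough to interrogate directly. This sketch (the encoding as a dictionary is my own) checks the Latin-square property and hunts for left and right identities:

```python
elems = "wxyz"
# Each row of the Cayley table, read left to right.
rows = {"w": "xwzy", "x": "yzwx", "y": "zyxw", "z": "wxyz"}
op = {(a, b): rows[a][i] for a in elems for i, b in enumerate(elems)}

# Latin square: every row and every column is a permutation of {w, x, y, z}.
assert all(sorted(rows[a]) == sorted(elems) for a in elems)
assert all(sorted(op[(a, b)] for a in elems) == sorted(elems) for b in elems)

left_ids  = [e for e in elems if all(op[(e, a)] == a for a in elems)]
right_ids = [e for e in elems if all(op[(a, e)] == a for a in elems)]
print(left_ids, right_ids)  # ['z'] [] -- a left identity, but no right identity
```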

Sometimes, a property can hold for almost all elements and fail for just one troublemaker. Take the rational numbers $\mathbb{Q}$ with the operation $a * b = a + b - ab$. If we have $a * x = a * y$, does that imply $x = y$? Let's see:

$$a + x - ax = a + y - ay \implies x(1-a) = y(1-a)$$

As long as $a \neq 1$, we can divide by $(1-a)$ and conclude that $x = y$. The cancellation law holds! But if $a = 1$, the equation becomes $x \cdot 0 = y \cdot 0$, or $0 = 0$, which is true for any $x$ and $y$. So for $a = 1$, the cancellation law catastrophically fails. One single element spoils a property for the entire set.
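You can watch this troublemaker in action with exact fractions, so no floating-point fuzz intervenes (the helper name `star` is my own):

```python
from fractions import Fraction as F

def star(a, b):
    return a + b - a * b  # the operation a * b = a + b - ab on Q

# For a != 1, cancellation holds: distinct x and y give distinct results.
a, x, y = F(2), F(3), F(7)
assert star(a, x) != star(a, y)

# But a = 1 absorbs everything: 1 * b = 1 + b - b = 1 for every b.
assert all(star(F(1), b) == F(1) for b in [F(0), F(3), F(-5), F(9, 2)])
print("cancellation holds away from 1, fails at 1")
```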

When One Rule Isn't Enough: The World of Rings

Our everyday arithmetic involves two operations: addition and multiplication. An algebraic structure that tries to capture this is called a ring. A ring has a set and two operations, let's call them + and ·. For a structure to be a ring, it must satisfy a specific checklist:

  1. The set with the + operation must form an abelian group (an abelian group is a group whose operation is also commutative, i.e., $a + b = b + a$).
  2. The set with the · operation must form a semigroup.
  3. The two operations must be linked by the distributive laws: $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$ and $(a + b) \cdot c = (a \cdot c) + (b \cdot c)$.

Let's try to build a ring from something familiar: sets. Let $X$ be a set with at least two elements. We'll use the power set $\mathcal{P}(X)$ (the set of all subsets of $X$) as our elements. Let's define "addition" as set union ($\cup$) and "multiplication" as set intersection ($\cap$). Does $(\mathcal{P}(X), \cup, \cap)$ form a ring?

Let's check the first requirement: is $(\mathcal{P}(X), \cup)$ an abelian group?

  • It's closed and associative, and commutativity holds since $A \cup B = B \cup A$.
  • The additive identity is the empty set, $\emptyset$, since $A \cup \emptyset = A$.
  • But what about additive inverses? For a subset $A$, we need an inverse $-A$ such that $A \cup (-A) = \emptyset$. This is only possible if $A$ itself is the empty set! If $A$ is non-empty, you can't "union" it with anything to make it disappear. So, the structure fails to be an additive group at a fundamental level. It's not a ring!

Even though it failed, this example teaches us something else. Let's pretend for a moment it was a ring and check the other properties. The additive identity (the "zero") is $\emptyset$. Multiplication is intersection. Does it have zero-divisors? A zero-divisor is a non-zero element which, when multiplied by another non-zero element, gives zero. In our case, this means: can we find two non-empty sets $A$ and $B$ whose intersection is empty? Of course! If $X = \{x, y, \dots\}$, let $A = \{x\}$ and $B = \{y\}$. Then $A \neq \emptyset$ and $B \neq \emptyset$, but $A \cap B = \emptyset$. So $A$ and $B$ are zero-divisors. This is a behavior you never see with ordinary numbers (if $a \cdot b = 0$, one of them must be 0), and it marks a major difference in the structure.
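Frozen sets make both failures concrete. Taking $X = \{x, y\}$, no non-empty subset has a union-inverse, while two non-empty disjoint subsets "multiply" (intersect) to the zero element:

```python
X = frozenset({"x", "y"})
# The power set of X: all four subsets.
P = [frozenset(), frozenset({"x"}), frozenset({"y"}), X]

empty = frozenset()

# No additive inverse: a non-empty A can never be "unioned" down to empty.
A = frozenset({"x"})
assert not any(A | B == empty for B in P)

# Zero-divisors: two non-empty sets whose intersection is the "zero" element.
B = frozenset({"y"})
assert A != empty and B != empty and A & B == empty
print("no union-inverses; intersection has zero-divisors")
```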

The Secret Unity: Isomorphism and Beyond

The true power of this abstract viewpoint is recognizing when two different-looking structures are, in fact, the same in disguise. This is the idea of isomorphism.

Consider the integers $\mathbb{Z}$ with a bizarre operation: $x * y = x + y - 5$. This seems alien. Let's find its identity element $e$: $x * e = x \implies x + e - 5 = x \implies e = 5$. This is already strange. But watch what happens if we "re-center" our world around this new identity. Define new coordinates $x' = x - 5$. Then

$$x * y = (x' + 5) * (y' + 5) = (x' + 5) + (y' + 5) - 5 = x' + y' + 5$$

so in the new coordinates, $(x * y)' = (x * y) - 5 = x' + y'$. It's just regular addition! The complicated-looking operation $*$ on the set $\mathbb{Z}$ is structurally identical—isomorphic—to the simple operation $+$ on the same set. It was just wearing a mask. By understanding the abstract structure, we see that we are not dealing with a new beast, but a familiar friend in a funny hat.
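The re-centering map $\varphi(x) = x - 5$ is the isomorphism, and the homomorphism property $\varphi(x * y) = \varphi(x) + \varphi(y)$ can be spot-checked over a range of integers (function names here are my own):

```python
def star(x, y):
    return x + y - 5  # the "bizarre" operation on the integers

def phi(x):
    return x - 5  # re-centering: translate the identity 5 to 0

# phi converts * into ordinary addition, pair by pair.
assert all(phi(star(x, y)) == phi(x) + phi(y)
           for x in range(-20, 21) for y in range(-20, 21))

assert star(5, 12) == 12  # 5 really is the identity for *
print("x * y = x + y - 5 is addition in disguise")
```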

This unifying power is the heart of the subject. It allows us to solve a problem in one domain by translating it to another, more convenient one. And it reveals that structures can belong to entirely different families. Let's return to the cross product. We saw it wasn't associative. But it isn't lawless. It obeys a different, more complex rule called the Jacobi identity:

$$\vec{a} \times (\vec{b} \times \vec{c}) + \vec{b} \times (\vec{c} \times \vec{a}) + \vec{c} \times (\vec{a} \times \vec{b}) = \vec{0}$$

This property, along with anti-commutativity ($\vec{a} \times \vec{b} = -\vec{b} \times \vec{a}$), makes $(\mathbb{R}^3, \times)$ a Lie algebra. These are the fundamental algebraic structures that describe continuous symmetries, such as rotations, and they are at the very heart of modern physics.
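Using the same hand-rolled cross product as before, the Jacobi identity and anti-commutativity can be spot-checked on arbitrary integer vectors, where the arithmetic is exact:

```python
def cross(a, b):
    # Standard 3D cross product.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add3(u, v, w):
    # Component-wise sum of three vectors.
    return tuple(x + y + z for x, y, z in zip(u, v, w))

a, b, c = (1, 2, 3), (-4, 0, 5), (2, -7, 1)

# Jacobi identity: the three cyclic double products sum to the zero vector.
jacobi = add3(cross(a, cross(b, c)),
              cross(b, cross(c, a)),
              cross(c, cross(a, b)))
assert jacobi == (0, 0, 0)

# Anti-commutativity: a x b = -(b x a).
assert cross(a, b) == tuple(-t for t in cross(b, a))
print("Jacobi identity and anti-commutativity hold")
```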

Even on a simple 2D plane, we can define different Lie algebra structures. We could define a "boring" abelian Lie algebra where the bracket of any two vectors is just zero: $[x, y]_1 = 0$. Here, nothing interacts. Or, we could define a non-abelian structure where the basis vectors have a non-trivial relationship, like $[e_1, e_2]_2 = e_1$. These two structures are built on the same set of "pieces" ($\mathbb{R}^2$), but the different "rules" make them fundamentally distinct worlds. The first corresponds to a world with two independent, non-interacting symmetries, while the second describes one where symmetries can combine to produce another.

From simple games with strings to the symmetries of the universe, the principles are the same. By defining a set and a set of rules, an entire world of logical consequences unfolds. The beauty of abstract algebra is in finding the common rulebook that governs them all.

Applications and Interdisciplinary Connections

You might think that the study of algebraic structures—groups, rings, fields, and the like—is a rather abstract game, a bit of mathematical housekeeping to keep things tidy. We’ve just spent a chapter carefully laying out the rules, the axioms, the definitions. But to leave it at that would be a terrible mistake. It would be like learning the rules of grammar without ever reading a poem, or studying the theory of harmony without ever hearing a symphony. The profound beauty of these structures lies not in their definitions, but in their surprising and ubiquitous appearance across the entire scientific landscape. They are the unseen architecture of reality, the fundamental patterns that nature, computer programs, and even the shape of space itself seem to love to use.

Let's begin our journey in a world that is, in a sense, purely built from logic and rules: the world of computer science. Consider the simple idea of a set of items, say $S$. Now think about all the possible subsets you can form, the so-called power set $\mathcal{P}(S)$. It seems like just a collection. But what if we define an operation on these subsets? A wonderfully useful one is the "symmetric difference," written as $A \Delta B$. It's the set of all things that are in $A$ or in $B$, but not in both. If you've ever dealt with logic gates, you'll recognize this instantly: it's the exclusive OR (XOR) operation. A bit is in the result if the corresponding bits in the inputs are different. Now, let's ask the question: what is the structure of $(\mathcal{P}(S), \Delta)$?

Amazingly, it forms a perfect abelian group! The identity element is the empty set, $\emptyset$, because taking the symmetric difference of any set $A$ with nothing leaves you with $A$. And the inverse? Here's the kicker: every set is its own inverse! $A \Delta A$ is always the empty set. This simple group structure, born from elementary set theory, is the algebraic backbone of everything from error-correcting codes to cryptography. For instance, the structure of many linear codes is built upon vector spaces over the field of two elements, $\mathbb{Z}_2$, whose addition is precisely this XOR operation. Look at what happened: we started with a simple logical idea and discovered a rich algebraic world humming just beneath the surface.
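A quick sketch with Python's built-in symmetric-difference operator (`^` on sets) confirms every group axiom over the full power set of a three-element set, including the striking self-inverse property:

```python
from itertools import combinations

S = frozenset({"a", "b", "c"})
# The power set of S: all 8 subsets.
P = [frozenset(c) for n in range(len(S) + 1) for c in combinations(sorted(S), n)]

empty = frozenset()
for A in P:
    assert A ^ empty == A            # the empty set is the identity
    assert A ^ A == empty            # every element is its own inverse!
    for B in P:
        assert (A ^ B) in P          # closure
        assert A ^ B == B ^ A        # commutativity (abelian)
        for C in P:
            assert (A ^ B) ^ C == A ^ (B ^ C)  # associativity
print("(P(S), Delta) is an abelian group")
```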

This theme of finding hidden structure repeats itself. Take permutations. We can think of them as just shuffling things. Let's consider a specific set of shuffles: swapping element 1 with 2, 3 with 4, and so on, up to $2n-1$ with $2n$. Each swap is a simple operation, a transposition. What happens if we look at the group generated by all these swaps? Since these swaps act on completely different elements, they don't interfere with each other; in algebraic terms, they commute. The structure they generate is not some fiendishly complex permutation group, but the clean, elegant direct product of $n$ copies of the simplest non-trivial group, $S_2$ (or $\mathbb{Z}_2$). This is the algebraic essence of independent binary choices, and this very structure, $(\mathbb{Z}_2)^n$, appears again as the foundation for digital systems and information theory.

But sometimes, the hidden structure is anything but simple. Consider a seemingly trivial computational task: determining if a number, written in binary, is divisible by 3. You could write a simple program for this. A computer scientist might design a finite-state automaton, a little machine that reads the binary string one digit at a time and keeps track of the running remainder modulo 3. It's a beautiful, efficient little machine. But we, as students of algebra, can ask a deeper question: what is the algebraic soul of this computation? The set of all transformations that the input symbols perform on the machine's states forms a structure called a syntactic monoid. For this simple problem of divisibility by 3, this structure is not a simple cyclic group or something you might guess from the number 3. It is, against all intuition, the full symmetric group on three elements, $S_3$—the non-abelian group of all permutations of three objects! Just think about that. A basic arithmetic property, when viewed through an algebraic lens, is revealed to be governed by the same structure that describes the symmetries of an equilateral triangle. This is the power of algebra: it finds unexpected and profound unities between seemingly disparate worlds.
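We can generate that structure directly. In the standard automaton (sketched here under that assumption), the state is the remainder mod 3 of the bits read so far, and reading a bit $b$ sends remainder $r$ to $(2r + b) \bmod 3$. Closing the two bit-maps under composition yields exactly six distinct transformations, each a permutation of the three states: the symmetric group $S_3$.

```python
# Transition maps on states {0, 1, 2}: remainder mod 3 of the bits read so far.
t0 = tuple((2 * r + 0) % 3 for r in range(3))  # effect of reading bit '0'
t1 = tuple((2 * r + 1) % 3 for r in range(3))  # effect of reading bit '1'

def compose(f, g):
    # Apply f first, then g (i.e., read one more symbol).
    return tuple(g[f[r]] for r in range(3))

# Close {t0, t1} under composition: the transition monoid of the automaton.
monoid = {t0, t1}
while True:
    new = {compose(f, g) for f in monoid for g in monoid} - monoid
    if not new:
        break
    monoid |= new

print(len(monoid))  # 6
assert len(monoid) == 6
# Every transformation is a bijection of the three states: a permutation.
assert all(sorted(f) == [0, 1, 2] for f in monoid)
```

Both generators are transpositions — reading '0' swaps remainders 1 and 2, reading '1' swaps 0 and 1 — and two overlapping transpositions generate all of $S_3$.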

From the discrete world of computation, let’s take a leap into the continuous world of shape and space. Can algebra tell us what a donut "feels" like? The field of algebraic topology answers with a resounding "yes!" Consider two surfaces: a torus (the surface of a donut) and a Klein bottle. Both can be made by gluing the edges of a square of paper. For the torus, you glue opposite sides in the same direction. For the Klein bottle, you give one pair of sides a twist before gluing. They seem similar. But if you were an ant walking on their surfaces, you'd notice a difference. On a torus, if you walk one lap around the short way and then one lap around the long way, you end up in the same place as if you'd done it in the opposite order. Your paths commute. On the Klein bottle, because of that twist, the order matters. A trip "around" and then "through" is not the same as a trip "through" and then "around"—you end up facing a different direction.

This physical experience is captured perfectly by an algebraic object called the fundamental group. It turns out the fundamental group of the torus is abelian—in fact, it's $\mathbb{Z} \times \mathbb{Z}$, reflecting the two independent, commuting directions you can travel. But the fundamental group of the Klein bottle is non-abelian. The commutation relation fails. An algebraic equation ($ab = ba$ versus $aba^{-1} = b^{-1}$) becomes the litmus test for the global nature of a geometric space. The abstract algebraic structure is the shape, in its most essential form.

This marriage of algebra and geometry finds its ultimate expression in modern physics. In the grand theories of nature, from quantum mechanics to general relativity, symmetry is king. And symmetry is the language of group theory. When physicists build a theory, they often start with a vast, complicated space of possibilities. Then, they impose symmetries—demands that the laws of physics should look the same from different perspectives. These symmetries, which form a group, act on the space of possibilities and perform a magical act of simplification. They select the physically relevant structures. In a beautiful example from representation theory, one can start with an infinitely complicated object known as a tensor algebra and see how the action of a simple symmetry group, like the two-element group $C_2$, carves out a much simpler, familiar structure: a basic polynomial algebra, $\mathbb{C}[x]$. This process of finding the "invariants" under a group action is a central tool for physicists taming the complexity of quantum fields. It's also the deep idea behind the Gelfand-Naimark theorem in functional analysis, which reveals that sometimes a complicated algebra of operators—like those in quantum mechanics—is secretly isomorphic to a much simpler algebra of functions on some underlying space.

Perhaps the most breathtaking union of all is seen in Einstein's theory of general relativity. In the vacuum of space, far from any matter, spacetime is not necessarily flat. It can be curved by the presence of a gravitational field, like the one around a rotating black hole. This curvature is described by a mathematical object called the Weyl tensor. The Weyl tensor has its own internal algebraic structure, which classifies spacetimes into different "Petrov types." This classification is not just a labeling scheme. The Goldberg-Sachs theorem provides the dictionary that translates this algebra into geometry. It states that for a vacuum spacetime, the algebraic type of the Weyl tensor is directly tied to the geometric properties of light rays passing through it. The Kerr solution, which describes a rotating black hole, is of Petrov type D. The Goldberg-Sachs theorem demands that this algebraic property has a stunning physical consequence: there exist two special families of light rays (the ingoing and outgoing principal null congruences) that travel through this curved spacetime without being sheared or distorted by tidal forces. The algebra at a single point in spacetime issues a command that governs the geometry of light across the cosmos.

Even in engineering, where we build things, algebraic structures provide the blueprint for understanding complexity. Linear systems are simple because they obey the principle of superposition: the response to a sum of inputs is the sum of the responses. But what happens when things are not quite linear? For a weakly nonlinear system, modeled by a Volterra series, this simple addition breaks down. The output contains all sorts of cross-terms. It's a mess. Or is it? It turns out this mess is perfectly organized. The organizing principle is no longer simple addition but the structure of a graded symmetric algebra. The theory tells us that to understand the nonlinear response, we must lift our inputs into this richer algebraic world where the cross-products live. When linearity fails, a more sophisticated algebra is waiting to take its place.

From the logic gates of a computer, to the shape of the universe, to the physics of a black hole, the same fundamental patterns—the same algebraic structures—appear again and again. They are the universal language of science, the invisible girders and arches that support the entire edifice of our understanding. To learn them is to learn the secret alphabet in which the book of nature is written.