Exterior algebra

Geometric algebra was not born in an ivory tower; it grew out of three very human lives, each marked by faith, doubt, and an almost mystical hunger for structure.

The quiet mystic: Grassmann
Hermann Grassmann was a provincial schoolteacher, a Lutheran theologian manqué who raised a large family and spent his evenings inventing an algebra for space that nobody wanted to read.

Between lessons, political essays, Freemason meetings, and plans to evangelize China, he wrote the Ausdehnungslehre—a book that looks more like a private revelation than a textbook, weaving geometry, language, and scripture into one symbolic universe.

The tormented romantic: Hamilton
William Rowan Hamilton lived in an observatory near Dublin, surrounded by telescopes, poetry, and unfulfilled love affairs.

He discovered quaternions in a flash of illumination on a bridge, carved the fundamental formula into the stone, and then spent years struggling with alcohol, domestic tensions, and a restless, almost alchemical desire to turn geometry into pure algebraic light.

The ethical freethinker: Clifford
William Kingdon Clifford was a brilliant, fragile Victorian who burned himself out by his mid‑thirties, trying to fuse mathematics, physics, and a new secular ethics.

He admired Grassmann, extended his ideas, and at the same time attacked traditional religion, arguing that even belief itself must obey strict moral laws—no faith without evidence, no comfort without intellectual honesty.

The private life of geometric algebra
Seen together, Grassmann, Hamilton, and Clifford form something like a hidden trinity of geometric algebra: the quiet mystic of extension, the romantic discoverer of hypercomplex numbers, and the ethical revolutionary who welded them into a single geometric language.

Their theories are usually presented as cold formalism, yet they were written under the pressure of illness, unrequited love, religious struggle, and a deep, almost esoteric conviction that space itself carries meaning.

Toward a mathematics of consciousness?
If geometric algebra unifies length, angle, and orientation into one symbolic field, it is tempting to imagine a future “extension theory” in which thoughts, qualia, and inner states are modeled as multivectors in a higher‑dimensional cognitive space.

A bold extrapolation is that consciousness might eventually be described as a kind of global geometric field over the brain and its environment, with attention, memory, and emotion appearing as special “directions” and “blades” in an unseen algebraic universe.

In such a picture, Grassmann’s extension, Hamilton’s quaternions, and Clifford’s geometric product would be early pages in a much larger Ausdehnungslehre der Seele—an algebra of mind where the geometry of space and the geometry of experience finally meet.

In this post we describe the exterior algebra, invented by Hermann Grassmann around 1844.

In “Tensors are geometric objects” and “Tensors on a picnic” we have defined contravariant, covariant, and mixed tensors by their transformation properties under a change of basis. Let V be an n-dimensional real vector space. The space of p-contravariant tensors is usually denoted \bigotimes^p V. The space of p-covariant tensors is denoted \bigotimes^p V^*. It can be interpreted in two ways, either as (\bigotimes^p V)^*, or as \bigotimes^p (V^*). Both are correct, since if t^* is p times covariant, we can interpret t^* as a linear form on p-times contravariant tensors by

    \[ <t^*,t>={t^*}_{i_1\ldots i_p}t^{i_1\ldots i_p},\]

and every linear form on \bigotimes^p V can be uniquely represented in this way.

In the literature we can find different notations, and it is good to know about them. I will partly employ the notation used by Marian Fecko in his book [1]. I truly love this book! Here is my review.
In this book T^p(V) is used for \bigotimes^p V, T_q(V) for \bigotimes^q V^*, and T^p_q(V) for the space of p-contravariant and q-covariant tensors. They are called tensors of type \binom{p}{q}. Tensors of type \binom{0}{p} that are completely antisymmetric are called p-forms.

If t_1,t_2,\ldots,t_p are in V, we denote by t_1\otimes t_2\otimes\cdots\otimes t_p the tensor t with components

    \[t^{i_1i_2\ldots i_p}=t_1^{i_1}t_2^{i_2}\cdots t_p^{i_p}.\]

Tensors of this form are called simple. Simple tensors span all of \bigotimes^p V. The same applies to covariant tensors.
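
As a concrete numerical illustration (my own sketch, not from the original post; it assumes numpy), a simple tensor is just an outer product of arrays, and the pairing <t^*,t> above is a full contraction over all indices:

```python
# A minimal numerical sketch (illustration only): simple tensors as
# outer products, and the pairing <t*, t> as a full index contraction.
import numpy as np

n = 3
rng = np.random.default_rng(0)
t1, t2, t3 = rng.standard_normal((3, n))

# t^{ijk} = t1^i t2^j t3^k -- a simple 3-contravariant tensor
t = np.einsum('i,j,k->ijk', t1, t2, t3)

# an arbitrary 3-covariant tensor t*, paired with t over all indices
tstar = rng.standard_normal((n, n, n))
print(np.einsum('ijk,ijk->', tstar, t))   # <t*, t>
```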

The most important feature is the universal property of \bigotimes^p V: every p-linear map f from V\times\cdots\times V (p times) to a vector space F induces a unique linear map (denoted by the same letter f) from \bigotimes^p V to F, such that

    \[f(v_1,\ldots, v_p)= f(v_1\otimes\cdots\otimes v_p).\]

If e_i, (i=1,\ldots,n) is a basis in V, then e_{i_1}\otimes\cdots\otimes e_{i_p} is a basis in \bigotimes^p V. Tensor indices refer to such a basis: if t\in\bigotimes^p V, then

    \[ t=t^{i_1\ldots i_p}e_{i_1}\otimes\cdots\otimes e_{i_p}.\]

Of special interest are (totally) antisymmetric (or “skew-symmetric”) tensors. These are tensors (covariant or contravariant) that change sign under odd permutations of their indices. Let \mathfrak{S}_p denote the permutation group on p elements. Then \mathfrak{S}_p acts on \bigotimes^p V by

    \[ \sigma\cdot \left(t_1\otimes\cdots\otimes t_p\right) = t_{\sigma^{-1}(1)}\otimes\cdots\otimes t_{\sigma^{-1}(p)},\quad\sigma\in\mathfrak{S}_p.\]

This is for simple tensors. For a general tensor, \sigma acts simply by the corresponding permutation of its indices. A tensor t in \bigotimes^p V is antisymmetric if, for every \sigma\in \mathfrak{S}_p,

    \[\sigma\cdot t=\epsilon_\sigma\, t,\]

where \epsilon_\sigma is the sign of \sigma, \epsilon_\sigma=+1 for \sigma even, \epsilon_\sigma = -1 if \sigma is odd.
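
Here is a small numerical sketch of this action (my own illustration; on an array, \sigma acts by permuting the axes, and the antisymmetry test is convention-independent):

```python
# The S_p action realized by permuting array axes, and an antisymmetry
# test: sigma . t = eps_sigma t for all sigma (illustration only).
import numpy as np
from itertools import permutations

# sign of a permutation = determinant of its permutation matrix
sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))

def is_antisymmetric(t):
    return all(np.allclose(np.transpose(t, s), sign(s) * t)
               for s in permutations(range(t.ndim)))

n = 3
rng = np.random.default_rng(1)
u, v = rng.standard_normal((2, n))
print(is_antisymmetric(np.outer(u, v) - np.outer(v, u)))  # True
print(is_antisymmetric(np.outer(u, v)))                   # False
```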

We then define the antisymmetrization operator:

    \[ {\bf a}\cdot t=\sum_{\sigma\in \mathfrak{S}_p}\epsilon_\sigma\, (\sigma\cdot t).\]

We can also characterize antisymmetric tensors using generalized Kronecker deltas discussed in “It’s all about permutations“. Namely, t is antisymmetric if and only if

(0)   \[ \delta^{i_1\ldots i_p}_{j_1\ldots j_p}\,t^{j_1\ldots j_p}=p!\,t^{i_1\ldots i_p}.\]

The antisymmetrization operator can be then also written in components as

    \[ ({\bf a}\cdot t)^{i_1\ldots i_p}=\delta^{i_1\ldots i_p}_{j_1\ldots j_p}\,t^{j_1\ldots j_p}.\]

Exercise 1. Justify the two last statements.

There is also a variation of the antisymmetrization operator, denoted by \pi^A in [1]:

    \[ \pi^A(t)=\frac{1}{p!}{\bf a}\cdot t\]

for t\in T_p(V), with the {\bf a} operator defined above. It has the advantage that \pi^A\circ \pi^A=\pi^A. Then, for t\in T_p(V) we write

    \[ t_{[a\ldots b]}=\pi^A(t)_{a\ldots b}.\]

The same for t\in T^p(V).
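
Exercise 1 asks for proofs; the snippet below (my own sketch, with helper names of my choosing) merely checks Eq. (0), the component formula for {\bf a}\cdot t, and the idempotence of \pi^A numerically, for n=3, p=2:

```python
# Numerical sanity checks (not proofs): the antisymmetrizer a, the
# generalized Kronecker delta, Eq. (0), and idempotence of pi^A.
import numpy as np
from itertools import permutations
from math import factorial

sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))

def antisymmetrize(t):                     # a . t
    return sum(sign(s) * np.transpose(t, s)
               for s in permutations(range(t.ndim)))

def gen_delta(n, r):                       # delta^{i_1..i_r}_{j_1..j_r}
    d = np.zeros((n,) * (2 * r))
    for idx in np.ndindex(*d.shape):
        i, j = idx[:r], idx[r:]
        d[idx] = sum(sign(s) for s in permutations(range(r))
                     if all(i[k] == j[s[k]] for k in range(r)))
    return d

n, p = 3, 2
rng = np.random.default_rng(2)
t0 = rng.standard_normal((n,) * p)         # arbitrary tensor
t = antisymmetrize(t0)                     # an antisymmetric tensor
D = gen_delta(n, p)

# Eq. (0): delta-contraction of an antisymmetric t gives p! t
print(np.allclose(np.einsum('abcd,cd->ab', D, t), factorial(p) * t))     # True
# components of a . t0 via the delta contraction
print(np.allclose(np.einsum('abcd,cd->ab', D, t0), antisymmetrize(t0)))  # True
# pi^A is a projection: pi^A(pi^A(t0)) = pi^A(t0)
piA = lambda x: antisymmetrize(x) / factorial(x.ndim)
print(np.allclose(piA(piA(t0)), piA(t0)))                                # True
```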

The p–th exterior power \Lambda^p V

The vector space of p-fold antisymmetric covariant (resp. contravariant) tensors is denoted \Lambda^p V^* (resp. \Lambda^p V), and is called the p-th exterior power of V^* (resp. of V) – the space of p-forms (resp. p-vectors). Notice that for p>n the only antisymmetric tensor is the zero tensor. (Why?). The whole exterior algebra \Lambda V^* is then defined as a direct sum:

    \[ \Lambda V^*={\Lambda}^0 V^*\oplus{\Lambda}^1V^*\oplus\cdots\oplus\Lambda^n V^*,\]

where {\Lambda}^0 V^* is understood as the field \bR itself.
If e_i,\quad (i=1,\ldots,n) is a basis in V, and e^i is the dual basis in V^*, then the tensors

    \[e_{i_1\ldots i_p}=\delta^{j_1\ldots j_p}_{i_1\ldots i_p}e_{j_1}\otimes\cdots\otimes e_{j_p},\]

with i_1<i_2<\cdots <i_p, form a basis in {\Lambda}^p V, and

    \[e^{i_1\ldots i_p}=\delta_{j_1\ldots j_p}^{i_1\ldots i_p}e^{j_1}\otimes\cdots\otimes e^{j_p},\]

with i_1<i_2<\cdots <i_p, form a basis in {\Lambda}^p V^*.
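
As a quick worked example (mine, not in the original): counting strictly increasing multi-indices gives

    \[ \dim {\Lambda}^p V=\binom{n}{p},\qquad \dim\Lambda V=\sum_{p=0}^n\binom{n}{p}=2^n.\]

For n=3 the dimensions of {\Lambda}^0 V,\ldots,{\Lambda}^3 V are 1, 3, 3, 1, and {\Lambda}^2 V has the basis e_{12}, e_{13}, e_{23}, with, e.g., e_{12}=e_1\otimes e_2-e_2\otimes e_1.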

The wedge product

We now define the algebra product in \Lambda V. For t\in{\Lambda}^p V and s\in{\Lambda}^q V we define t\wedge s \in {\Lambda}^{p+q} V by

    \[ t\wedge s=\frac{1}{p!q!}{\bf a}\cdot (t\otimes s),\]

or explicitly, in components:

(1)   \[ (t\wedge s)^{i_1\ldots i_pi_{p+1}\ldots i_{p+q}}=\frac{1}{p!q!} \delta^{i_1\ldots i_pi_{p+1}\ldots i_{p+q}}_{j_1\ldots j_pj_{p+1}\ldots j_{p+q}}\,t^{j_1\ldots j_p}\,s^{j_{p+1}\ldots j_{p+q}}.\]

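Before extending the product, here is Eq. (1) turned into code (my own sketch; gen_delta and wedge are my names, and the implementation assumes p, q >= 1). For two vectors it reproduces (u\wedge v)^{ij}=u^iv^j-u^jv^i:

```python
# The wedge product via Eq. (1) -- illustration only, for p, q >= 1.
import numpy as np
from itertools import permutations
from math import factorial
import string

sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))

def gen_delta(n, r):
    d = np.zeros((n,) * (2 * r))
    for idx in np.ndindex(*d.shape):
        i, j = idx[:r], idx[r:]
        d[idx] = sum(sign(s) for s in permutations(range(r))
                     if all(i[k] == j[s[k]] for k in range(r)))
    return d

def wedge(t, s):
    n, p, q = t.shape[0], t.ndim, s.ndim
    up = string.ascii_lowercase[:p + q]              # i_1 ... i_{p+q}
    lo = string.ascii_lowercase[p + q:2 * (p + q)]   # j_1 ... j_{p+q}
    D = gen_delta(n, p + q)
    return np.einsum(f'{up + lo},{lo[:p]},{lo[p:]}->{up}',
                     D, t, s) / (factorial(p) * factorial(q))

n = 3
rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, n))
print(np.allclose(wedge(u, v), np.outer(u, v) - np.outer(v, u)))  # True
```
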
The wedge product is then extended to the whole \Lambda V by linearity. Using the properties of Kronecker deltas it can then be shown that the product is associative, so that \Lambda V becomes an associative algebra (with unit 1) – the exterior (or Grassmann) algebra of V. To prove associativity we will need a little lemma. In fact it belongs in the post on Kronecker deltas, but I forgot about this useful property while writing that post, so here it is:

Lemma 1. For any totally antisymmetric array t^{j_1\ldots j_r} we have

    \[\delta^{i_1\ldots i_r}_{j_1\ldots j_r}t^{j_1\ldots j_r}=r!\,t^{i_1\ldots i_r}.\]

The proof follows directly by applying Eq. (3) from the previous post and using antisymmetry of t – we get r! identical terms.

Proof of associativity.

For t\in\Lambda^p V^*, s\in \Lambda^q V^*, u\in\Lambda^r V^*, using (adapted) Eq. (1) we have

    \[ (t\wedge s)_{j_1\ldots j_pj_{p+1}\ldots j_{p+q}}=\frac{1}{p!q!} \delta_{j_1\ldots j_pj_{p+1}\ldots j_{p+q}}^{k_1\ldots k_pk_{p+1}\ldots k_{p+q}}\,t_{k_1\ldots k_p}\,s_{k_{p+1}\ldots k_{p+q}},\]

and

    \[ \begin{split}&((t\wedge s)\wedge u)_{i_1\ldots i_{p+q+r}}\\ &= \frac{1}{(p+q)!r!}\,\delta_{i_1\ldots i_{p+q+r}}^{j_1\ldots j_{p+q}j_{p+q+1}\ldots j_{p+q+r}}(t\wedge s)_{j_1\ldots j_{p+q}} u_{j_{p+q+1}\ldots j_{p+q+r}}\\ &=\frac{1}{(p+q)!p!q!r!}\delta_{i_1\ldots i_{p+q+r}}^{j_1\ldots j_{p+q+r}}\delta_{j_1\ldots j_{p+q}}^{k_1\ldots k_{p+q}}\,t_{k_1\ldots k_p}\,s_{k_{p+1}\ldots k_{p+q}}u_{j_{p+q+1}\ldots j_{p+q+r}}\\ &=\frac{1}{p!q!r!} \delta_{i_1\ldots i_{p+q+r}}^{k_1\ldots k_{p+q}j_{p+q+1}\ldots j_{p+q+r}}\,t_{k_1\ldots k_p}\,s_{k_{p+1}\ldots k_{p+q}}u_{j_{p+q+1}\ldots j_{p+q+r}}, \end{split}\]

where the last equality is obtained using Lemma 1, and the fact that \delta_{i_1\ldots i_{p+q+r}}^{j_1\ldots j_{p+q+r}} is antisymmetric in the indices j_1,\ldots, j_{p+q}.
Calculating now, in a similar way, t\wedge (s\wedge u), we obtain, as it is easy to guess, the same result. This proves the associativity of the wedge product.\qed
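
The same conclusion can be checked numerically (my own sketch, using the equivalent form t\wedge s=\frac{1}{p!q!}{\bf a}\cdot(t\otimes s); the check is done on vectors, which already span the relevant spaces):

```python
# Associativity of the wedge product, checked numerically (not a proof).
import numpy as np
from itertools import permutations
from math import factorial

sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))

def antisymmetrize(t):                     # a . t
    return sum(sign(s) * np.transpose(t, s)
               for s in permutations(range(t.ndim)))

def wedge(t, s):                           # t ^ s = a.(t (x) s) / (p! q!)
    return (antisymmetrize(np.tensordot(t, s, axes=0))
            / (factorial(t.ndim) * factorial(s.ndim)))

n = 4
rng = np.random.default_rng(4)
u, v, w = rng.standard_normal((3, n))
print(np.allclose(wedge(wedge(u, v), w), wedge(u, wedge(v, w))))  # True
```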

Remark. The above definition (1) of the wedge product works fine for \bR and \bC, but may fail over a general field, where the factor 1/(p!q!) can cause problems (for instance, in the field of integers modulo 2 we have -1=+1 and 2!=0). Therefore mathematicians who like to be as general as possible introduce the exterior algebra (of a “module over a commutative ring”) differently.

References
[1] Fecko, M., “Differential Geometry and Lie Groups for Physicists”, Cambridge University Press 2006.


Kronecker generalized deltas and Levi-Civita epsilons

My last post “It’s all about permutations” was a complete failure. Really very bad. I copied, without much thinking, some formulas from “Mathematical Handbook For Scientists And Engineers“, Granino A. Korn and Theresa M. Korn, Dover 2000, and it was a disaster. Some formulas in this edition are simply wrong, some have typos, some are incomplete. I tried to save time and, as a result, I lost a lot of time. I will try to remember not to repeat this mistake in the future.

On the other hand, while trying to make sure that I know the correct formulas, I learned some useful stuff. So I decided to present these Kronecker deltas and Levi-Civita epsilons the way I understand them now. But first of all I owe my Readers an explanation: why do I care about these permutations? I was supposed to discuss conformal structures, and here are all these epsilons and deltas instead! Why? Here is a short answer: in physics conformal structures became interesting due to conformal invariance of Maxwell equations. Maxwell equations can be elegantly written in terms of differential forms, and their conformal invariance can be deduced from this form. Moreover, by using differential forms we can go beyond the standard Maxwell equations and discuss pre-metric and nonlinear variations of electromagnetism. Differential forms that enter Maxwell equations can be naturally identified with antisymmetric covariant tensors and pseudo-tensor densities. And it is there that we need epsilons and deltas. So, here they are.

First comes the famous Kronecker delta \delta^i_j, i,j=1,\ldots,n. Everybody knows what it is – if written as a matrix, it is the identity matrix. The next question is: is it a tensor?

To answer this question we need to return to the post “Tensors on a picnic” (https://arkadiusz-jadczyk.eu/blog/2025/12/tensors-on-a-picnic/). It looks like we have a one-covariant and one-contravariant tensor (sometimes we say that it is a (1,1) tensor). If \xi is a tensor of such a type, then, if we change a basis, its components should transform according to the rule:

    \[ \xi^{i'}_{j'} = A^{i'}_i A^j_{j'} \xi^i_j. \]

Now, let us set \xi^i_j = \delta^i_j. On the right-hand side we can use the “summation with delta” rule to get

    \[ \xi^{i'}_{j'} = A^{i'}_i A^j_{j'} \delta^i_j = A^{i'}_i A^i_{j'}. \]

But the matrix A^{i'}_i is the inverse of A^i_{i'}, therefore the result is

    \[ \xi^{i'}_{j'} = \delta^{i'}_{j'}. \]

So indeed, \delta^i_j is a tensor. A very special tensor, since it has exactly the same components in every basis. Sometimes such tensors are called “numerical”, or “absolute” tensors.
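
The little calculation above is easy to replay numerically (my own sketch, assuming numpy):

```python
# delta^i_j has the same components in every basis: transform it with a
# random invertible A and recover the identity (illustration only).
import numpy as np

n = 4
rng = np.random.default_rng(5)
A = rng.standard_normal((n, n))      # A^{i'}_i (invertible with probability 1)
B = np.linalg.inv(A)                 # the inverse matrix A^i_{j'}

# xi^{i'}_{j'} = A^{i'}_i A^j_{j'} delta^i_j, i.e. A @ I @ B
xi = np.einsum('Ii,ij,jJ->IJ', A, np.eye(n), B)
print(np.allclose(xi, np.eye(n)))    # True: delta is an "absolute" tensor
```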

Let us now introduce “generalized Kronecker deltas”. For every integer r, 1\leq r\leq n, we define

(1)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \det\begin{pmatrix} \delta^{i_1}_{j_1} & \delta^{i_1}_{j_2} & \cdots & \delta^{i_1}_{j_r} \\ \delta^{i_2}_{j_1} & \delta^{i_2}_{j_2} & \cdots & \delta^{i_2}_{j_r} \\ \vdots & \vdots & \vdots & \vdots \\ \delta^{i_r}_{j_1} & \delta^{i_r}_{j_2} & \cdots & \delta^{i_r}_{j_r} \end{pmatrix}. \end{equation*}

Here, with S_r denoting the permutation group on r elements, the determinant of a square r\times r matrix A^i_{\,j} is defined by the formula

(2)   \begin{align*} \det(A) &= \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, A^1_{\sigma(1)} A^2_{\sigma(2)} \cdots A^r_{\sigma(r)} \notag\\ &= \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, A_1^{\sigma(1)} A_2^{\sigma(2)} \cdots A_r^{\sigma(r)}. \end{align*}

Here the obvious typos \sigma\{2\} have been corrected to \sigma(2).

Combining the two formulas we get

(3)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, \delta^{i_1}_{j_{\sigma(1)}} \cdots \delta^{i_r}_{j_{\sigma(r)}}. \end{equation*}

Interchanging two upper indices amounts to interchanging two rows of the matrix. Similarly, interchanging two lower indices amounts to interchanging two columns of the matrix. In both cases the determinant changes its sign. Therefore \delta^{i_1\ldots i_r}_{j_1\ldots j_r} is totally antisymmetric.

For instance

    \[ \delta^{ij}_{kl} = \delta^i_k \delta^j_l - \delta^i_l \delta^j_k. \]

It follows that if two upper indices i_k and i_l are equal, then \delta^{i_1\ldots i_r}_{j_1\ldots j_r}=0. The same happens when two lower indices are equal. Therefore, to get a nonzero result, all indices i_1,\ldots,i_r must be different, and all indices j_1,\ldots,j_r must be different. Since each of the indices has a value between 1 and n (inclusive), it is clear that, to get a non-identically-zero symbol, we must have r\leq n. Why?

To calculate a determinant we usually expand it according to a given row or a given column. Let us expand our determinant with respect to the first row. The result is:

(4)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{k=1}^r (-1)^{k-1} \delta^{i_1}_{j_k} \delta^{i_2\ldots i_k\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}, \end{equation*}

where the hat indicates a skipped index.

A similar formula holds if we expand according to the first column. The immediate consequence of this formula is that, for \delta^{i_1\ldots i_r}_{j_1\ldots j_r} to have a non-zero value, the system of upper indices i_1,\ldots, i_r must be a permutation of the lower indices j_1,\ldots,j_r. Every index i_k must appear (and only once) among the indices j_1,\ldots,j_r. (Why?) If we apply the same permutation to upper and lower indices, the value of \delta remains unchanged. (Why?)
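
Eq. (3) translates directly into code (my own sketch; gen_delta is my name for it). The checks below confirm the two-index example above and the vanishing for r>n:

```python
# The generalized Kronecker delta via Eq. (3) -- illustration only.
import numpy as np
from itertools import permutations

sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))

def gen_delta(n, r):
    """delta^{i_1..i_r}_{j_1..j_r}; upper indices first, shape (n,)*2r."""
    d = np.zeros((n,) * (2 * r), dtype=int)
    for idx in np.ndindex(*d.shape):
        i, j = idx[:r], idx[r:]
        d[idx] = sum(sign(s) for s in permutations(range(r))
                     if all(i[k] == j[s[k]] for k in range(r)))
    return d

n = 3
eye = np.eye(n, dtype=int)
# delta^{ij}_{kl} = delta^i_k delta^j_l - delta^i_l delta^j_k
expected = (np.einsum('ik,jl->ijkl', eye, eye)
            - np.einsum('il,jk->ijkl', eye, eye))
print(np.array_equal(gen_delta(n, 2), expected))   # True
print(np.all(gen_delta(2, 3) == 0))                # True: zero when r > n
```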

Levi-Civita symbol

We define two symbols:

    \[ \epsilon^{i_1\ldots i_n} = \delta^{i_1\ldots i_n}_{1\ldots n}, \]

    \[ \epsilon_{i_1\ldots i_n} = \delta_{i_1\ldots i_n}^{1\ldots n}. \]

We call them “symbols” rather than tensors. We will see the reason for this soon. We notice immediately that \epsilon^{i_1\ldots i_n} is non-zero if and only if i_1,\ldots, i_n is a permutation of 1,\ldots,n. Its value is the sign of the permutation: +1 for an even permutation, and -1 for an odd permutation. The same applies to \epsilon_{i_1\ldots i_n}, therefore, numerically,

    \[ \epsilon_{i_1\ldots i_n} = \epsilon^{i_1\ldots i_n}, \]

and we instantly get

    \[ \delta^{i_1\ldots i_n}_{j_1\ldots j_n} = \epsilon^{i_1\ldots i_n} \epsilon_{j_1\ldots j_n}. \]

Combining Eqs. (1) and (2) we deduce, for any real or complex n\times n matrix A, that the following two formulas hold:

(5)   \begin{equation*} \epsilon^{i_1\ldots i_n} A_{i_1}^{j_1} \cdots A_{i_n}^{j_n} = \det(A)\, \epsilon^{j_1\ldots j_n}. \end{equation*}

and

(6)   \begin{equation*} \epsilon_{i_1\ldots i_n} A^{i_1}_{j_1} \cdots A^{i_n}_{j_n} = \det(A)\, \epsilon_{j_1\ldots j_n}. \end{equation*}
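
Both identities are easy to confirm numerically (my own sketch; Eq. (6) is numerically the same computation, with the two matrix indices read the other way):

```python
# Numerical check of Eqs. (5)-(6) for n = 3 (illustration only).
import numpy as np
from itertools import permutations

n = 3
sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))
eps = np.zeros((n,) * n)
for s in permutations(range(n)):
    eps[s] = sign(s)                 # eps = sign of the permutation

rng = np.random.default_rng(6)
A = rng.standard_normal((n, n))      # A_{i}^{j}: first index lower

# Eq. (5): eps^{i1..in} A_{i1}^{j1} ... A_{in}^{jn} = det(A) eps^{j1..jn}
lhs = np.einsum('abc,aI,bJ,cK->IJK', eps, A, A, A)
print(np.allclose(lhs, np.linalg.det(A) * eps))    # True
```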

With that we can now address the question: is \epsilon a tensor? We recall from the post “Densities” (https://arkadiusz-jadczyk.eu/blog/2025/12/densities/) that a density of weight w has an extra factor in its transformation rule, namely |\det(A)|^{-w} if the basis transforms as e_{i'} = e_i A^i_{i'}. On the other hand a pseudo-tensor has an extra factor \mathrm{sgn}(\det(A)). Rewriting now Eq. (6) as

    \[ A^{i_1}_{i_1'} \cdots A^{i_n}_{i_n'} \epsilon_{i_1\ldots i_n} = \det(A)\, \epsilon_{i_1'\ldots i_n'}, \]

we deduce that \epsilon_{i_1\ldots i_n} is a pseudo-tensor of type (0,n) and density of weight -1. Similarly \epsilon^{i_1\ldots i_n} is a pseudo-tensor of type (n,0) and density of weight +1. (Can you see it?)

But, of course, both behave as bona fide tensors if we restrict transformations to the group \mathrm{SL}(n,\bR).

Contractions of deltas

We can use Eq. (4) to get (How?)

    \[ \delta^{i_1 i_2\ldots i_r}_{i_1 j_2\ldots j_r} = (n-r+1)\, \delta^{i_2\ldots i_r}_{j_2\ldots j_r}. \]

Repeating this process, we get a more general formula

(7)   \begin{equation*} \delta^{i_1 i_2\ldots i_s i_{s+1}\ldots i_r}_{i_1 i_2\ldots i_s j_{s+1} \ldots j_r} = (n-r+1)\cdots(n-r+s)\, \delta^{i_{s+1}\ldots i_r}_{j_{s+1}\ldots j_r}. \end{equation*}
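
Here is a numerical confirmation of the first contraction formula (my own sketch; gen_delta is the same implementation as in the snippet above):

```python
# Numerical check of the contraction formula (illustration only).
import numpy as np
from itertools import permutations

sign = lambda s: int(round(np.linalg.det(np.eye(len(s))[list(s)])))

def gen_delta(n, r):
    d = np.zeros((n,) * (2 * r), dtype=int)
    for idx in np.ndindex(*d.shape):
        i, j = idx[:r], idx[r:]
        d[idx] = sum(sign(s) for s in permutations(range(r))
                     if all(i[k] == j[s[k]] for k in range(r)))
    return d

n, r = 4, 3
D3, D2 = gen_delta(n, 3), gen_delta(n, 2)
# contract the first upper index with the first lower index:
contracted = np.einsum('aijakl->ijkl', D3)
print(np.array_equal(contracted, (n - r + 1) * D2))   # True: n - r + 1 = 2
```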

Afternotes

11-1-2026 10:40 It seems that proving the formula

    \[\delta^{i_1i_2\ldots i_r}_{i_1j_2\ldots j_r}=(n-r+1)\delta^{i_2\ldots i_r}_{j_2\ldots j_r}\]

is not as easy as I thought. It requires some patience and skill with juggling the nasty indices. So here is the derivation. We use (4):

(4)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{k=1}^r (-1)^{k-1} \delta^{i_1}_{j_k} \delta^{i_2\ldots i_k\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}, \end{equation*}

and split the sum over k into the term k=1 and the rest:

(8)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r}=\delta^{i_1}_{j_1}\delta^{i_2\ldots i_r}_{j_2\ldots j_r}+\sum_{k=2}^r (-1)^{k-1} \delta^{i_1}_{j_k} \delta^{i_2\ldots i_k\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}. \end{equation*}

Now we set j_1=i_1, and sum over i_1 from 1 to n:

(9)   \begin{equation*} \delta^{i_1i_2\ldots i_r}_{i_1j_2\ldots j_r}=\delta^{i_1}_{i_1}\delta^{i_2\ldots i_r}_{j_2\ldots j_r}+\sum_{k=2}^r (-1)^{k-1} \delta^{i_1}_{j_k} \delta^{i_2\ldots i_r}_{i_1j_2\ldots \hat{j_k}\ldots j_r} =I+II. \end{equation*}

We use \delta^{i_1}_{i_1}=n, so

    \[ I=n\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}.\]

In the remaining sum we sum over i_1 with \delta^{i_1}_{j_k}, which amounts to replacing i_1 by j_k:

    \[II=\sum_{k=2}^r (-1)^{k-1} \delta^{i_2\ldots i_r}_{j_kj_2\ldots \hat{j_k}\ldots j_r}.\]

The sum II has r-1 terms II_k, k=2,\ldots,r. Consider first II_2:

    \[ II_2=(-1)^{2-1}\delta^{i_2\ldots i_r}_{j_2\ldots j_r}=-\delta^{i_2\ldots i_r}_{j_2\ldots j_r}.\]

Consider now II_3.

    \[ II_3=(-1)^{3-1}\delta^{i_2\ldots i_r}_{j_3j_2\ldots j_r}= -\delta^{i_2\ldots i_r}_{j_2j_3\ldots j_r}.\]

All of the r-1 terms II_k are identical: transpositions of j_k from the first position to its regular position, where it was omitted, provide the needed signs. Thus II=-(r-1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}, and I+II=(n-r+1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}, as claimed.


It’s all about permutations

Permutations appear whenever “things are the same, but in a different order.” That simple idea hides a powerful engine that underlies physics, geometry, information, and even the notion of identity itself.

In quantum mechanics, identical particles do not carry little name tags. Two electrons swapped in space give a state that is “the same up to a phase.” The swap is a permutation, and the phase is a one-dimensional representation of the permutation group on two letters.

For bosons, permutations act trivially: exchange two bosons and the wavefunction is unchanged.

For fermions, odd permutations contribute a minus sign, and the Pauli exclusion principle is encoded in this antisymmetry.

More exotic anyons in two dimensions correspond to richer representations of braid groups, which generalize permutations by letting trajectories wind around each other.

Thus whole classes of matter are distinguished by how their quantum states transform under permutations of identical particles. The statistics of the universe are, at base, choices about how permutation symmetry is represented.

In topology, a covering space has fibers over each point; going around a loop downstairs lifts to a permutation of the fiber upstairs. The resulting homomorphism from the fundamental group to a permutation group is the monodromy.

In differential geometry, the orientation of a frame is encoded in whether the change of basis is an even or odd permutation of a fixed oriented basis, composed with a continuous transformation.

Permutations are not just a toy in elementary group theory; they are the algebraic shadow of a deeper idea: that the universe is less about what things and relations are, and more about how indistinguishable pieces can be rearranged to create complexity.

In the future we will use a bunch of classical formulas often used when dealing with antisymmetric tensors (multivectors and differential forms). They can be easily found online, but here I am simply sharing two pages from “Mathematical Handbook For Scientists And Engineers“, Granino A. Korn and Theresa M. Korn, Dover 2000 (or the Russian edition: Г. Корн, Т. Корн, “Справочник По Математике Для Научных Работников И Инженеров”, «Наука» 1973). Some formulas are better in the Russian translation/edition.

Exercise 1. I asked Perplexity AI to write for me all properties of deltas and epsilons. One of the formulas Perplexity wrote was this one:
If a metric g_{ij} exists,

(1)   \begin{equation*} \varepsilon^{i_1 \dots i_n} = g^{i_1 j_1} \cdots g^{i_n j_n} \varepsilon_{j_1 \dots j_n}. \end{equation*}

Is it correct? If not, what should be written instead?

Note. Formula 16.5-4, as it is written in the English Dover edition, is WRONG! Why?
