Kronecker generalized deltas and Levi-Civita epsilons

My last post, “It’s all about permutations”, was a complete failure. Really very bad. I copied, without much thinking, some formulas from the “Mathematical Handbook for Scientists and Engineers” by Granino A. Korn and Theresa M. Korn, Dover 2000, and it was a disaster. Some formulas in this edition are simply wrong, some have typos, some are incomplete. I tried to save time and, as a result, I lost a lot of time. I will try to remember not to repeat this mistake in the future.

On the other hand, while trying to make sure that I know the correct formulas, I learned some useful stuff. So I decided to present these Kronecker deltas and Levi-Civita epsilons the way I understand them now. But first of all I owe my Readers an explanation: why do I care about these permutations? I was supposed to discuss conformal structures, and here are all these epsilons and deltas instead! Why? Here is a short answer: in physics conformal structures became interesting due to conformal invariance of Maxwell equations. Maxwell equations can be elegantly written in terms of differential forms, and their conformal invariance can be deduced from this form. Moreover, by using differential forms we can go beyond the standard Maxwell equations and discuss pre-metric and nonlinear variations of electromagnetism. Differential forms that enter Maxwell equations can be naturally identified with antisymmetric covariant tensors and pseudo-tensor densities. And it is there that we need epsilons and deltas. So, here they are.

First comes the famous Kronecker delta \delta^i_j, i,j=1,\ldots,n. Everybody knows what it is – if written as a matrix, it is the identity matrix. The next question is: is it a tensor?

To answer this question we need to return to the \href{https://arkadiusz-jadczyk.eu/blog/2025/12/tensors-on-a-picnic/}{post on tensors}. It looks like we have a once-covariant, once-contravariant tensor (sometimes we say that it is a (1,1) tensor). If \xi is a tensor of this type then, when we change the basis, its components should transform according to the rule:

    \[ \xi^{i'}_{j'} = A^{i'}_i A^j_{j'} \xi^i_j. \]

Now, let us set \xi^i_j = \delta^i_j. On the right-hand side we can use the “summation with delta” rule to get

    \[ \xi^{i'}_{j'} = A^{i'}_i A^j_{j'} \delta^i_j = A^{i'}_i A^i_{j'}. \]

But the matrix A^{i'}_i is the inverse of A^i_{i'}, therefore the result is

    \[ \xi^{i'}_{j'} = \delta^{i'}_{j'}. \]

So indeed, \delta^i_j is a tensor. A very special tensor, since it has exactly the same components in every basis. Sometimes such tensors are called “numerical”, or “absolute” tensors.
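
For Readers who like to check such things numerically, here is a minimal Python sketch (my own illustration, not part of the original post; numpy and all the variable names are my assumptions):

```python
# A quick numerical check that delta^i_j has the same components in every basis.
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))    # transition matrix A^{i'}_i (invertible with probability 1)
A_inv = np.linalg.inv(A)       # its inverse, the matrix A^i_{j'}

delta = np.eye(n)              # delta^i_j in the old basis
# xi^{i'}_{j'} = A^{i'}_i A^j_{j'} delta^i_j collapses to the matrix product A . delta . A^{-1}
delta_new = A @ delta @ A_inv

assert np.allclose(delta_new, np.eye(n))   # delta^{i'}_{j'} = delta^i_j
```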

Let us now introduce “generalized Kronecker deltas”. For every integer r, 1\leq r\leq n, we define

(1)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \det\begin{pmatrix} \delta^{i_1}_{j_1} & \delta^{i_1}_{j_2} & \cdots & \delta^{i_1}_{j_r} \\ \delta^{i_2}_{j_1} & \delta^{i_2}_{j_2} & \cdots & \delta^{i_2}_{j_r} \\ \vdots & \vdots & \ddots & \vdots \\ \delta^{i_r}_{j_1} & \delta^{i_r}_{j_2} & \cdots & \delta^{i_r}_{j_r} \end{pmatrix}. \end{equation*}

Here, with S_r denoting the permutation group on r elements, the determinant of a square r\times r matrix A^i_{\,j} is defined by the formula

(2)   \begin{align*} \det(A) &= \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, A^1_{\sigma(1)} A^2_{\sigma(2)} \cdots A^r_{\sigma(r)} \notag\\ &= \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, A_1^{\sigma(1)} A_2^{\sigma(2)} \cdots A_r^{\sigma(r)}. \end{align*}

Here the obvious typos in the book, \sigma\{2\} instead of \sigma(2), have been corrected.
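
As a sanity check of formula (2), here is a short Python sketch (mine, not from any of the books quoted here; the name det_by_permutations is made up) comparing the permutation-sum definition of the determinant with numpy's:

```python
# Determinant via the permutation sum (2), compared with numpy's determinant.
from itertools import permutations
import numpy as np

def sgn(p):
    """Sign of a permutation p (a tuple of 0-based indices):
    (-1) raised to the number of inversions."""
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def det_by_permutations(A):
    """det(A) = sum over sigma in S_r of sgn(sigma) A^1_{sigma(1)} ... A^r_{sigma(r)}."""
    r = len(A)
    total = 0.0
    for sigma in permutations(range(r)):
        term = float(sgn(sigma))
        for i in range(r):
            term *= A[i][sigma[i]]   # A^i_{sigma(i)}
        total += term
    return total

A = np.random.default_rng(1).normal(size=(4, 4))
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
```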

Combining the two formulas we get

(3)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, \delta^{i_1}_{j_{\sigma(1)}} \cdots \delta^{i_r}_{j_{\sigma(r)}}. \end{equation*}

Interchanging two upper indices amounts to interchanging two rows of the matrix. Similarly, interchanging two lower indices amounts to interchanging two columns of the matrix. In both cases the determinant changes its sign. Therefore \delta^{i_1\ldots i_r}_{j_1\ldots j_r} is totally antisymmetric.

For instance

    \[ \delta^{ij}_{kl} = \delta^i_k \delta^j_l - \delta^i_l \delta^j_k. \]
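
Formula (3) translates directly into code. The following sketch (again mine; gen_delta is a made-up name, and indices run from 1 to n as in the text) builds the generalized delta from the permutation sum and verifies the r=2 example above for all index values:

```python
# Generalized Kronecker delta via the permutation sum (3),
# checked against the explicit r=2 formula above.
from itertools import permutations, product

def sgn(p):
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def gen_delta(upper, lower):
    """delta^{i_1...i_r}_{j_1...j_r}: each sigma contributes sgn(sigma)
    when i_a = j_{sigma(a)} for all a, and 0 otherwise."""
    r = len(upper)
    return sum(sgn(sig) for sig in permutations(range(r))
               if all(upper[a] == lower[sig[a]] for a in range(r)))

n = 3
for i, j, k, l in product(range(1, n + 1), repeat=4):
    assert gen_delta((i, j), (k, l)) == (i == k) * (j == l) - (i == l) * (j == k)
```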

It follows that if two upper indices i_k and i_l are equal, then \delta^{i_1\ldots i_r}_{j_1\ldots j_r}=0. The same happens when two lower indices are equal. Therefore, to get a nonzero result, all indices i_1,\ldots,i_r must be different, and all indices j_1,\ldots,j_r must be different. Since each of the indices has a value between 1 and n (inclusive), it is clear that, to get a non-identically-zero symbol, we must have r\leq n. Why?

To calculate a determinant we usually expand it according to a given row or a given column. Let us expand our determinant with respect to the first row. The result is:

(4)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{k=1}^r (-1)^{k-1} \delta^{i_1}_{j_k}\, \delta^{i_2\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}, \end{equation*}

where the hat indicates a skipped index.

A similar formula holds if we expand according to the first column. The immediate consequence of this formula is that, for \delta^{i_1\ldots i_r}_{j_1\ldots j_r} to have a non-zero value, the system of upper indices i_1,\ldots, i_r must be a permutation of the lower indices j_1,\ldots,j_r. Every index i_k must appear (and only once) among the indices j_1,\ldots,j_r. (Why?) If we apply the same permutation to upper and lower indices, the value of \delta remains unchanged. (Why?)
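
The expansion (4) can also be confirmed by brute force; a sketch, reusing the same made-up helpers as above:

```python
# First-row Laplace expansion (4) of the generalized Kronecker delta.
from itertools import permutations, product

def sgn(p):
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def gen_delta(upper, lower):
    r = len(upper)
    return sum(sgn(sig) for sig in permutations(range(r))
               if all(upper[a] == lower[sig[a]] for a in range(r)))

n, r = 4, 3
for idx in product(range(1, n + 1), repeat=2 * r):
    upper, lower = idx[:r], idx[r:]
    # sum_k (-1)^{k-1} delta^{i_1}_{j_k} delta^{i_2...i_r}_{j_1...(j_k skipped)...j_r},
    # written with a 0-based k, so (-1)^{k-1} becomes (-1)**k.
    rhs = sum((-1) ** k * (upper[0] == lower[k])
              * gen_delta(upper[1:], lower[:k] + lower[k + 1:])
              for k in range(r))
    assert gen_delta(upper, lower) == rhs
```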

Levi-Civita symbol

We define two symbols:

    \[ \epsilon^{i_1\ldots i_n} = \delta^{i_1\ldots i_n}_{1\ldots n}, \]

    \[ \epsilon_{i_1\ldots i_n} = \delta_{i_1\ldots i_n}^{1\ldots n}. \]

We call them “symbols” rather than tensors. We will see the reason for this soon. We notice immediately that \epsilon^{i_1\ldots i_n} is non-zero if and only if i_1,\ldots, i_n is a permutation of 1,\ldots,n. Its value is the sign of the permutation: +1 for an even permutation, and -1 for an odd permutation. The same applies to \epsilon_{i_1\ldots i_n}, therefore, numerically,

    \[ \epsilon_{i_1\ldots i_n} = \epsilon^{i_1\ldots i_n}, \]

and we instantly get

    \[ \delta^{i_1\ldots i_n}_{j_1\ldots j_n} = \epsilon^{i_1\ldots i_n} \epsilon_{j_1\ldots j_n}. \]
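
Both statements are easy to verify by exhausting all index combinations; a sketch (eps is my own helper, following the definitions above):

```python
# Levi-Civita symbols and the identity delta^{i_1...i_n}_{j_1...j_n} = eps^{i...} eps_{j...}.
from itertools import permutations, product

def sgn(p):
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def gen_delta(upper, lower):
    r = len(upper)
    return sum(sgn(sig) for sig in permutations(range(r))
               if all(upper[a] == lower[sig[a]] for a in range(r)))

def eps(idx):
    """Sign of idx as a permutation of 1..n, and 0 if it is not a permutation."""
    return sgn(idx) if sorted(idx) == list(range(1, len(idx) + 1)) else 0

n = 3
for idx in product(range(1, n + 1), repeat=2 * n):
    i, j = idx[:n], idx[n:]
    assert gen_delta(i, j) == eps(i) * eps(j)
```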

Combining Eqs. (1) and (2) we deduce, for any real or complex n\times n matrix A, that the following two formulas hold:

(5)   \begin{equation*} \epsilon^{i_1\ldots i_n} A_{i_1}^{j_1} \cdots A_{i_n}^{j_n} = \det(A)\, \epsilon^{j_1\ldots j_n}. \end{equation*}

and

(6)   \begin{equation*} \epsilon_{i_1\ldots i_n} A^{i_1}_{j_1} \cdots A^{i_n}_{j_n} = \det(A)\, \epsilon_{j_1\ldots j_n}. \end{equation*}
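
Formula (6) can likewise be tested numerically with a random matrix (a sketch; here I switch to 0-based indices for convenience):

```python
# Check eps_{i_1...i_n} A^{i_1}_{j_1} ... A^{i_n}_{j_n} = det(A) eps_{j_1...j_n}.
from itertools import product
import numpy as np

def sgn(p):
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def eps(idx):
    """0-based Levi-Civita symbol: sign of idx as a permutation of 0..n-1, else 0."""
    return sgn(idx) if sorted(idx) == list(range(len(idx))) else 0

n = 3
A = np.random.default_rng(2).normal(size=(n, n))
for j in product(range(n), repeat=n):
    lhs = sum(eps(i) * np.prod([A[i[a], j[a]] for a in range(n)])
              for i in product(range(n), repeat=n))
    assert np.isclose(lhs, np.linalg.det(A) * eps(j))
```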

With that we can now address the question: is \epsilon a tensor? We recall from \href{https://arkadiusz-jadczyk.eu/blog/2025/12/densities/}{Densities} that a density of weight w has an extra factor in its transformation rule, namely |\det(A)|^{-w}, where A=(A^{i'}_i), if the basis transforms as e_{i'} = e_i A^i_{i'}. On the other hand, a pseudo-tensor has an extra factor \mathrm{sgn}(\det(A)). Rewriting now Eq. (6) as

    \[ A^{i_1}_{i_1'} \cdots A^{i_n}_{i_n'} \epsilon_{i_1\ldots i_n} = \det(A)\, \epsilon_{i_1'\ldots i_n'}, \]

we deduce that \epsilon_{i_1\ldots i_n} is a pseudo-tensor of type (0,n) and density of weight -1. Similarly \epsilon^{i_1\ldots i_n} is a pseudo-tensor of type (n,0) and density of weight +1. (Can you see it?)

But, of course, both behave as bona fide tensors if we restrict transformations to the group \mathrm{SL}(n,\mathbb{R}).

Contractions of deltas

We can use Eq. (4), with summation over the repeated index i_1 understood, to get (How?)

    \[ \delta^{i_1 i_2\ldots i_r}_{i_1 j_2\ldots j_r} = (n-r+1)\, \delta^{i_2\ldots i_r}_{j_2\ldots j_r}. \]

Repeating this process, we get a more general formula

(7)   \begin{equation*} \delta^{i_1 i_2\ldots i_s i_{s+1}\ldots i_r}_{i_1 i_2\ldots i_s j_{s+1} \ldots j_r} = (n-r+1)\cdots(n-r+s)\, \delta^{i_{s+1}\ldots i_r}_{j_{s+1}\ldots j_r}. \end{equation*}
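
Before turning to the proof, a brute-force check of formula (7) for small n, r, s may be reassuring (a sketch with the same made-up gen_delta helper):

```python
# Contracting the first s indices of the generalized Kronecker delta
# should produce the factor (n-r+1)(n-r+2)...(n-r+s).
from itertools import permutations, product

def sgn(p):
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def gen_delta(upper, lower):
    r = len(upper)
    return sum(sgn(sig) for sig in permutations(range(r))
               if all(upper[a] == lower[sig[a]] for a in range(r)))

n, r = 4, 3
for s in (1, 2):
    factor = 1
    for m in range(1, s + 1):
        factor *= n - r + m                       # (n-r+1)...(n-r+s)
    for free in product(range(1, n + 1), repeat=2 * (r - s)):
        up_free, low_free = free[:r - s], free[r - s:]
        contracted = sum(gen_delta(c + up_free, c + low_free)
                         for c in product(range(1, n + 1), repeat=s))
        assert contracted == factor * gen_delta(up_free, low_free)
```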

Afternotes

11-1-2026 10:40 It seems that proving the formula

    \[\delta^{i_1i_2\ldots i_r}_{i_1j_2\ldots j_r}=(n-r+1)\delta^{i_2\ldots i_r}_{j_2\ldots j_r}\]

is not as easy as I thought. It requires some patience and skill in juggling the nasty indices. So here is the derivation. We use (4):

(4)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{k=1}^r (-1)^{k-1} \delta^{i_1}_{j_k}\, \delta^{i_2\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}, \end{equation*}

and split the sum over k into the term k=1 and the rest:

(8)   \begin{equation*} \delta^{i_1\ldots i_r}_{j_1\ldots j_r}=\delta^{i_1}_{j_1}\delta^{i_2\ldots i_r}_{j_2\ldots j_r}+\sum_{k=2}^r (-1)^{k-1} \delta^{i_1}_{j_k}\, \delta^{i_2\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}. \end{equation*}

Now we set j_1=i_1, and sum over i_1 from 1 to n:

(9)   \begin{equation*} \delta^{i_1 i_2\ldots i_r}_{i_1 j_2\ldots j_r}=\delta^{i_1}_{i_1}\delta^{i_2\ldots i_r}_{j_2\ldots j_r}+\sum_{k=2}^r (-1)^{k-1} \delta^{i_1}_{j_k}\, \delta^{i_2\ldots i_r}_{i_1 j_2\ldots \hat{j_k}\ldots j_r} = I+II. \end{equation*}

We use \delta^{i_1}_{i_1}=n (the repeated index i_1 being summed over), so

    \[ I=n\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}.\]

In the remaining sum we sum over i_1 with \delta^{i_1}_{j_k}, which amounts to replacing i_1 by j_k:

    \[II=\sum_{k=2}^r (-1)^{k-1} \delta^{i_2\ldots i_r}_{j_k j_2\ldots \hat{j_k}\ldots j_r}.\]

The sum II has r-1 terms II_k, k=2,\ldots,r. Consider first II_2:

    \[ II_2=(-1)^{2-1}\delta^{i_2\ldots i_r}_{j_2\ldots j_r}=-\delta^{i_2\ldots i_r}_{j_2\ldots j_r}.\]

Consider now II_3:

    \[ II_3=(-1)^{3-1}\delta^{i_2\ldots i_r}_{j_3 j_2 j_4\ldots j_r}= -\delta^{i_2\ldots i_r}_{j_2 j_3 j_4\ldots j_r}.\]

All of the r-1 terms will be identical: moving j_k from the first position back to its regular position, where it was omitted, takes k-2 transpositions, which together with the factor (-1)^{k-1} give each term the overall sign -1. Hence

    \[II=-(r-1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r},\]

and I+II=(n-r+1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}, as claimed.


17 Responses to Kronecker generalized deltas and Levi-Civita epsilons

  1. Anna says:

    Ark, it seems to me that you’re being too hard on yourself. The post “It’s All About Permutations” was one of the most helpful for me: it reminded me how to deal with those scary objects with multiple indices. As far as I remember, Penrose even introduced the notion of “arms and legs” in an attempt to make tensors easier to work with. But your explanations are more instructive; after thinking through the exercises, I truly felt more confident.

    The very idea of permutations is very profound, since it is not about quantities but rather about ordering, which is another face of a number.

    And finally, the mysterious holographic images of the differently rearranged cats appearing in that post can in no case be called a failure.

    Moving to the present post, please take a look; there might be a typo here:
    “But the matrix A^{i’}_i is the inverse of A^i_{j’}” –>
    “But the matrix A^{i’}_i is the inverse of A^i_{i’}”,
    with reference to the “Tensors on a picnic” post.

  2. John G says:

    Yeah, the things Ark is hard on himself for are things I wouldn’t be able to notice anyway, while the posts themselves are quite interesting. The idea that, say, Cl(4) has 16 dimensions while the permutations of 4 things number 24, yet both have an odd/even grade/parity, is a little perplexing for me, but apparently explaining the details for the Clifford algebra case does get into permutations, and the grading is kind of just an end result that works. That Ark looks closely at the details is great for me, since things like number theory parallels are what draw me in; it certainly wasn’t my relatively basic math education.

    I used to think I’d never understand even a little about things like EEQT and differential geometry beyond something like EEQT being GRW-like. Now EEQT being GRW-like seems like a very minor detail, and I even see Ark’s EEQT and conformal structures work as related. I used to think of them as totally unrelated. It’s fun talking to AI about Ark’s overall work. It’s totally safe to bring up Ark to AI; Ark’s wife Laura and Ark’s friend Tony, not so much. I now introduce Tony by introducing Ark first.

    I’m probably interested in contravariant vs covariant indices related things for Born reciprocity position momentum phase space reasons but this may be more for future me at best. For Bradonjic unimodular volume form reasons, I like restricting to SL(n,R) though for Born reciprocity reasons I might like restricting even more to SP(n,R) and I’m not sure the pseudo tensors being treated as tensors still holds in this case. Course I’m not even sure what goes wrong parity-wise if you have pseudo tensors instead of tensors.

  3. Anna says:

    Ark, there are a few places where bold and italic fonts are not working correctly, e.g.:
    “The next question is: {\bf is it a tensor?}” and several cases below.

    Now I am trying to grasp formulas (1) and (2) and how we should use them to get (3). Isn’t it easier just to take (3) as a definition?

    As regards Why?s:

    “to get a non-identically-zero symbol, we must have r\leq n. Why?” If r > n, then two or more of the indices would have to coincide, and we would get zero.

    “Every index i_k must appear (and only once) among the indices j_1,\ldots,j_r. (Why?)”

    If no i_k appears among the indices “on the other floor”, then, in view of (3), all the (1,1)-deltas on the rhs would have different upper and lower indices, and hence none of them could be nonzero. And if some i_k appeared among the j_l twice or more times, then the value i_k, already used once to give \delta^{i_k}_{j_k}, would leave some other upper index without a matching lower one, and hence again every term would contain a delta with different upper and lower indices.

    “If we apply the same permutation to upper and lower indices, the value of delta remains unchanged. Why?”

    That is because each permutation (of upper or lower indices) only changes the result by the sign of the permutation. If we apply the same permutation twice, or any even number of times, we get the same result.

    • arkajad says:

      “Ark, there are a few places where bold and italic fonts are not working correctly,”

      Fixed. Thank you!

      “Isn’t it easier just to take (3) as a definition?”

      Sure. That would do as well. I was hesitating while writing this post.

      And your answers to “why’s” are perfect. Thanks.

  4. Anna says:

    “We can use Eq. (4) to get ({\bf How?})”
    It is not evident to me how to use (4); I would rather use 16.5-5 from Korn’s book,
    which states that r-rank deltas are related to s-rank deltas with the factor (n-s)!/(n-r)!.
    In our case, s = r-1, we have the coefficient (n-r+1)!/(n-r)! = n-r+1, and that is exactly what is wanted.

    As far as tensor densities are concerned, it is a new subject for me. So my understanding is only intuitive, not rigorous, at the moment.

    • arkajad says:

      “I would rather use 16.5-5 from the Korn’s book”
      One should not rely on formulas from books. Some of them may be false. That is what I have learned. You can use (4) by splitting the sum into two parts – one with \delta^{i_1}_{j_1} and the rest. Contracting i_1 and j_1 will give you n. Contracting the rest will give you r-1 terms. All these terms will be identical, so you will get n-(r-1) in front of the remaining delta.

      • Anna says:

        Thank you for such a skillful explanation! I should think a little bit more about it.

      • Anna says:

        In fact, I have some hesitation about “All these terms will be identical”.
        Here we are actually calculating the determinant of an (r x r) matrix and relating it to those of (r-1) x (r-1) matrices, right? In doing so one usually multiplies the elements of the upper row by the corresponding minors, and all these minors are different! Is it simplified in our case because all the minors of a matrix made of deltas are identical?

  5. Anna says:

    The explanation is crystal clear, thank you very much. I was close to a solution, but I lacked persistence and confidence to work with deltas so freely.

  6. Anna says:

    Ark, I looked through the reasoning carefully, and it is all clear; there is only one minor remark: when you write the sum for k=1, it is a bit misleading that k still stands there in the expression explicitly, in i_k and j_k. Wouldn’t it be better to omit these indices from the first sum?

  7. Anna says:

    Ark, I continue digging into relations between pi and CFT. A very useful book is Gelfand, Graev, Vilenkin: https://ikfia.ysn.ru/wp-content/uploads/2018/01/GelfandGraevVilenkin1962ru.pdf. It contains notions which have much in common with those considered in your recent posts.
    I make some notes but fail to boil them down into a smooth text… Still working on it.

    • arkajad says:

      Thank you very much, Anna! Good that you have reminded me of this book. But can you tell me some more detail about your idea? Which part of this book attracts your particular interest at the moment?

      • Anna says:

        At first I was interested only in the 3rd chapter, about groups of complex matrices as groups of motions of Lobachevsky space and related topics. And I definitely skipped the first two chapters, about the Radon transform; indeed, what is the Radon transform, who knows anything about it? 🙂 But then, after some dizzying turns, I suddenly came upon the X-ray transform! And this one is closely related to the Radon one. I would like to tell you more, but my notes are very chaotic now; I should put them in some order before showing you.
