My last post “It’s all about permutations” was a complete failure. Really very bad. I copied, without much thinking, some formulas from “Mathematical Handbook For Scientists And Engineers”, Granino A. Korn and Theresa M. Korn, Dover 2000, and it was a disaster. Some formulas in this edition are simply wrong, some have typos, some are incomplete. I tried to save time and, as a result, I lost a lot of time. I will try to remember not to repeat this mistake in the future.

On the other hand, while trying to make sure that I know the correct formulas, I learned some useful stuff. So I decided to present these Kronecker deltas and Levi-Civita epsilons the way I understand them now. But first of all I owe my Readers an explanation: why do I care about these permutations? I was supposed to discuss conformal structures, and here are all these epsilons and deltas instead! Why? Here is a short answer: in physics conformal structures became interesting due to conformal invariance of Maxwell equations. Maxwell equations can be elegantly written in terms of differential forms, and their conformal invariance can be deduced from this form. Moreover, by using differential forms we can go beyond the standard Maxwell equations and discuss pre-metric and nonlinear variations of electromagnetism. Differential forms that enter Maxwell equations can be naturally identified with antisymmetric covariant tensors and pseudo-tensor densities. And it is there that we need epsilons and deltas. So, here they are.
First comes the famous Kronecker delta $\delta^i_j$, $i,j=1,\ldots,n$. Everybody knows what it is – if written as a matrix, it is the identity matrix. The next question is: is it a tensor?
To answer this question we need to return to the \href{https://arkadiusz-jadczyk.eu/blog/2025/12/tensors-on-a-picnic/}{post on tensors}. It looks like we have a one-covariant and one-contravariant tensor (sometimes we say that it is a $(1,1)$ tensor). If $t^i_j$ is a tensor of such a type, then, if we change a basis, its components should transform according to the rule:

\[ t^{i'}_{j'} = A^{i'}_i\, A^j_{j'}\, t^i_j. \]
Now, let us set $t^i_j = \delta^i_j$. On the right-hand side we can use the “summation with delta” rule to get

\[ A^{i'}_i\, A^j_{j'}\, \delta^i_j = A^{i'}_i\, A^i_{j'}. \]
But the matrix $A^{i'}_i$ is the inverse of $A^i_{i'}$, therefore the result is

\[ A^{i'}_i\, A^i_{j'} = \delta^{i'}_{j'}. \]
So indeed, $\delta^i_j$ is a tensor. A very special tensor, since it has exactly the same components in every basis. Sometimes such tensors are called “numerical”, or “absolute”, tensors.
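As a quick sanity check, here is a small pure-Python sketch (my own illustration, not from the post; the 2×2 matrix is an arbitrary example): sandwiching the identity matrix between a matrix and its inverse gives back the identity, which is exactly the computation behind the invariance of the delta.

```python
# Numerical check that the Kronecker delta keeps its components under a
# change of basis: with B the inverse of A, the transformed components
# B . I . A coincide with I again.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A    = [[1.0, 2.0], [3.0, 4.0]]    # basis matrix A^i_{j'} (arbitrary, invertible)
Ainv = [[-2.0, 1.0], [1.5, -0.5]]  # its inverse, A^{i'}_i
I    = [[1.0, 0.0], [0.0, 1.0]]    # the delta, written as a matrix

transformed = matmul(matmul(Ainv, I), A)
assert transformed == I            # same components in the new basis
```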
Let us now introduce “generalized Kronecker deltas”. For every integer $r$, $1\le r\le n$, we define

(1)   \[ \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \det\begin{pmatrix} \delta^{i_1}_{j_1} & \cdots & \delta^{i_1}_{j_r}\\ \vdots & & \vdots\\ \delta^{i_r}_{j_1} & \cdots & \delta^{i_r}_{j_r} \end{pmatrix}. \]
Here, with $S_r$ denoting the permutation group on $r$ elements, the determinant of a square $r\times r$ matrix $A=(a_{ij})$ is defined by the formula

(2)   \[ \det A = \sum_{\pi\in S_r} \mathrm{sgn}(\pi)\, a_{1\pi(1)}\, a_{2\pi(2)} \cdots a_{r\pi(r)}. \]
Here the obvious typos in the handbook’s version of this formula have been corrected.
Combining the two formulas we get

(3)   \[ \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{\pi\in S_r} \mathrm{sgn}(\pi)\, \delta^{i_1}_{j_{\pi(1)}} \cdots \delta^{i_r}_{j_{\pi(r)}}. \]
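To make the definitions concrete, here is a brute-force sketch (my own, with hypothetical helper names): it computes the generalized delta both from the determinant definition (1)–(2) and from the expanded formula (3), and checks that the two agree for every choice of indices with $r=3$, $n=3$.

```python
from itertools import permutations, product

def sgn(p):
    """Sign of a permutation (a tuple): (-1)**(number of inversions)."""
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def det(M):
    """Determinant via formula (2): signed sum over permutations."""
    r, total = len(M), 0
    for p in permutations(range(r)):
        prod = 1
        for i in range(r):
            prod *= M[i][p[i]]
        total += sgn(p) * prod
    return total

def gdelta_det(up, lo):
    """Definition (1): determinant of the matrix of (1,1)-deltas."""
    return det([[1 if i == j else 0 for j in lo] for i in up])

def gdelta_sum(up, lo):
    """Formula (3): signed sum of products of deltas over permutations."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

for up in product((1, 2, 3), repeat=3):
    for lo in product((1, 2, 3), repeat=3):
        assert gdelta_det(up, lo) == gdelta_sum(up, lo)
```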
Interchanging two upper indices amounts to interchanging two rows of the matrix. Similarly, interchanging two lower indices amounts to interchanging two columns of the matrix. In both cases the determinant changes its sign. Therefore
$\delta^{i_1\ldots i_r}_{j_1\ldots j_r}$ is totally antisymmetric.
For instance

\[ \delta^{i_1 i_2}_{j_1 j_2} = \delta^{i_1}_{j_1}\delta^{i_2}_{j_2} - \delta^{i_1}_{j_2}\delta^{i_2}_{j_1} = -\,\delta^{i_2 i_1}_{j_1 j_2}. \]
It follows that if two upper indices $i_k$ and $i_l$ are equal, then $\delta^{i_1\ldots i_r}_{j_1\ldots j_r}=0$. The same happens when two lower indices are equal. Therefore, to get a nonzero result, all indices $i_1,\ldots,i_r$ must be different, and all indices $j_1,\ldots,j_r$ must be different. Since each of the indices has a value between $1$ and $n$ (inclusive), it is clear that, to get a non-identically-zero symbol, we must have $r\le n$. Why?
To calculate a determinant we usually expand it according to a given row or a given column. Let us expand our determinant with respect to the first row. The result is:

(4)   \[ \delta^{i_1\ldots i_r}_{j_1\ldots j_r} = \sum_{k=1}^{r} (-1)^{k+1}\, \delta^{i_1}_{j_k}\, \delta^{i_2\ldots i_r}_{j_1\ldots \hat{j_k}\ldots j_r}, \]

where the hat indicates a skipped index.
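The expansion (4) can also be machine-checked. A brute-force sketch (my own; the helpers are hypothetical, with the generalized delta computed from formula (3)):

```python
from itertools import permutations, product

def sgn(p):
    """Sign of a permutation: (-1)**(number of inversions)."""
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def gdelta(up, lo):
    """Generalized Kronecker delta via formula (3)."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

def expand_first_row(up, lo):
    """Right-hand side of Eq. (4): Laplace expansion along the first row."""
    r = len(up)
    total = 0
    for k in range(r):                     # k is 0-based, so the sign is (-1)**k
        if up[0] == lo[k]:                 # the factor delta^{i_1}_{j_k}
            total += (-1) ** k * gdelta(up[1:], lo[:k] + lo[k + 1:])
    return total

for up in product((1, 2, 3), repeat=3):
    for lo in product((1, 2, 3), repeat=3):
        assert gdelta(up, lo) == expand_first_row(up, lo)
```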
A similar formula holds if we expand according to the first column. The immediate consequence of this formula is that, for $\delta^{i_1\ldots i_r}_{j_1\ldots j_r}$ to have a non-zero value, the system of upper indices $i_1,\ldots,i_r$ must be a permutation of the lower indices $j_1,\ldots,j_r$. Every index $i_k$ must appear (and only once) among the indices $j_1,\ldots,j_r$. (Why?) If we apply the same permutation to upper and lower indices, the value of $\delta^{i_1\ldots i_r}_{j_1\ldots j_r}$ remains unchanged. (Why?)
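Both properties are easy to confirm by brute force (again my own sketch, with the generalized delta computed from formula (3)):

```python
from itertools import permutations, product

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def gdelta(up, lo):
    """Generalized Kronecker delta via formula (3)."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

# 1. Nonzero only when the upper indices are a permutation of the lower ones.
for up in product((1, 2, 3), repeat=3):
    for lo in product((1, 2, 3), repeat=3):
        if gdelta(up, lo) != 0:
            assert sorted(up) == sorted(lo)

# 2. The same permutation applied to both floors leaves the value unchanged.
up, lo = (1, 2, 3), (2, 1, 3)
for p in permutations(range(3)):
    permuted_up = tuple(up[k] for k in p)
    permuted_lo = tuple(lo[k] for k in p)
    assert gdelta(permuted_up, permuted_lo) == gdelta(up, lo)
```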
Levi-Civita symbol
We define two symbols:

\[ \epsilon_{i_1\ldots i_n} = \delta^{1\,2\,\ldots\,n}_{i_1 i_2\ldots i_n}, \]

\[ \epsilon^{i_1\ldots i_n} = \delta^{i_1 i_2\ldots i_n}_{1\,2\,\ldots\,n}. \]
We call them “symbols” rather than tensors. We will see the reason for this soon. We notice immediately that $\epsilon_{i_1\ldots i_n}$ is non-zero if and only if $(i_1,\ldots,i_n)$ is a permutation of $(1,2,\ldots,n)$. Its value is the sign of the permutation: $+1$ for an even permutation, and $-1$ for an odd permutation. The same applies to $\epsilon^{i_1\ldots i_n}$, therefore, numerically,

\[ \epsilon^{i_1\ldots i_n} = \epsilon_{i_1\ldots i_n}, \]
and we instantly get

\[ \epsilon^{i_1\ldots i_n}\,\epsilon_{j_1\ldots j_n} = \delta^{i_1\ldots i_n}_{j_1\ldots j_n}. \]
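A quick numerical confirmation (my own sketch) of the sign property and of the product identity $\epsilon^{i_1\ldots i_n}\epsilon_{j_1\ldots j_n}=\delta^{i_1\ldots i_n}_{j_1\ldots j_n}$, with the epsilons defined exactly as generalized deltas against $(1,2,\ldots,n)$:

```python
from itertools import permutations, product

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def gdelta(up, lo):
    """Generalized Kronecker delta via formula (3)."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

n = 3
ident = tuple(range(1, n + 1))            # (1, 2, ..., n)

def eps_lo(idx): return gdelta(ident, idx)   # epsilon with lower indices
def eps_up(idx): return gdelta(idx, ident)   # epsilon with upper indices

for idx in product(ident, repeat=n):
    assert eps_lo(idx) == eps_up(idx)        # numerically equal
    if sorted(idx) == list(ident):           # idx is a permutation of 1..n:
        assert eps_lo(idx) == sgn(tuple(i - 1 for i in idx))

# the product of the two epsilons reproduces the n-index generalized delta
for up in product(ident, repeat=n):
    for lo in product(ident, repeat=n):
        assert eps_up(up) * eps_lo(lo) == gdelta(up, lo)
```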
Combining Eqs. (1) and (2) we deduce, for any real or complex $n\times n$ matrix $A=(A^i_{\ j})$, that the following two formulas hold:

(5)   \[ \epsilon_{j_1\ldots j_n}\, A^{j_1}_{\ i_1}\cdots A^{j_n}_{\ i_n} = \det A\;\epsilon_{i_1\ldots i_n}, \]

and

(6)   \[ \epsilon^{j_1\ldots j_n}\, A^{i_1}_{\ j_1}\cdots A^{i_n}_{\ j_n} = \det A\;\epsilon^{i_1\ldots i_n}. \]
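Formulas (5) and (6) are easy to test numerically. The sketch below (my own, using an arbitrary 3×3 integer matrix) verifies (6); (5) is the same check with the roles of the two index positions of $A$ interchanged.

```python
from itertools import permutations, product

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def eps(idx):
    """Levi-Civita symbol on 0-based indices: sign if a permutation, else 0."""
    n = len(idx)
    return sgn(idx) if sorted(idx) == list(range(n)) else 0

def det(M):
    """Determinant via formula (2)."""
    n, total = len(M), 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += sgn(p) * prod
    return total

A = [[2, 1, 0], [0, 1, 3], [1, 0, 1]]   # an arbitrary test matrix
n = len(A)

# Eq. (6): eps^{j1...jn} A^{i1}_{j1} ... A^{in}_{jn} = det(A) eps^{i1...in}
for i in product(range(n), repeat=n):
    lhs = 0
    for j in product(range(n), repeat=n):
        prod = eps(j)
        for a in range(n):
            prod *= A[i[a]][j[a]]
        lhs += prod
    assert lhs == det(A) * eps(i)
```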
With that we can now address the question: is $\epsilon_{i_1\ldots i_n}$ a tensor? We recall from \href{https://arkadiusz-jadczyk.eu/blog/2025/12/densities/}{Densities} that a density of weight $w$ has an extra factor in its transformation rule, namely $|\det A|^{w}$, if the basis transforms as $e_{i'}=e_i A^i_{i'}$. On the other hand a pseudo-tensor has an extra factor $\mathrm{sign}(\det A)$. Rewriting now Eq. (6), applied to the matrix $(A^{i'}_j)$, as

\[ \epsilon^{i'_1\ldots i'_n} = \mathrm{sign}(\det A)\,|\det A|\;\epsilon^{j_1\ldots j_n}\, A^{i'_1}_{\ j_1}\cdots A^{i'_n}_{\ j_n}, \]

where $\det A$ refers to the basis matrix $(A^i_{i'})$, we deduce that $\epsilon^{i_1\ldots i_n}$ is a pseudo-tensor of type $(n,0)$ and density of weight $+1$. Similarly $\epsilon_{i_1\ldots i_n}$ is a pseudo-tensor of type $(0,n)$ and density of weight $-1$. (Can you see it?)
But, of course, both behave as bona fide tensors if we restrict transformations to the group $SL(n,\mathbb{R})$.
Delta’s contractions
We can use Eq. (4) to get (How?)

\[ \sum_{k=1}^{n} \delta^{k\, i_2\ldots i_r}_{k\, j_2\ldots j_r} = (n-r+1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}. \]
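The contraction formula can again be checked by brute force (my own sketch, $n=4$, $r=3$, with the generalized delta computed from formula (3)):

```python
from itertools import permutations, product

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def gdelta(up, lo):
    """Generalized Kronecker delta via formula (3)."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

n, r = 4, 3
for up in product(range(1, n + 1), repeat=r - 1):      # (i2, ..., ir)
    for lo in product(range(1, n + 1), repeat=r - 1):  # (j2, ..., jr)
        contracted = sum(gdelta((k,) + up, (k,) + lo) for k in range(1, n + 1))
        assert contracted == (n - r + 1) * gdelta(up, lo)
```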
Repeating this process, we get a more general formula

(7)   \[ \sum_{k_1,\ldots,k_{r-s}=1}^{n} \delta^{k_1\ldots k_{r-s}\, i_1\ldots i_s}_{k_1\ldots k_{r-s}\, j_1\ldots j_s} = \frac{(n-s)!}{(n-r)!}\,\delta^{i_1\ldots i_s}_{j_1\ldots j_s}. \]
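And a check of the general contraction formula (my own sketch, $n=4$, $r=3$, $s=1$, so the expected factor is $(n-s)!/(n-r)! = 3!/1! = 6$):

```python
from itertools import permutations, product
from math import factorial

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def gdelta(up, lo):
    """Generalized Kronecker delta via formula (3)."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

n, r, s = 4, 3, 1
factor = factorial(n - s) // factorial(n - r)          # = 6 here
for up in product(range(1, n + 1), repeat=s):
    for lo in product(range(1, n + 1), repeat=s):
        total = sum(gdelta(ks + up, ks + lo)
                    for ks in product(range(1, n + 1), repeat=r - s))
        assert total == factor * gdelta(up, lo)
```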
Afternotes
11-1-2026 10:40 It seems that proving the formula

\[ \sum_{k=1}^{n} \delta^{k\, i_2\ldots i_r}_{k\, j_2\ldots j_r} = (n-r+1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r} \]

is not as easy as I thought. It requires some patience and skill in juggling the nasty indices. So here is the derivation. We use (4):

\[ \delta^{i_1 i_2\ldots i_r}_{j_1 j_2\ldots j_r} = \sum_{k=1}^{r} (-1)^{k+1}\,\delta^{i_1}_{j_k}\,\delta^{i_2\ldots i_r}_{j_1\ldots\hat{j_k}\ldots j_r} \]

and split the sum over $k$ into the term $k=1$ and the rest:

(A1)   \[ \delta^{i_1 i_2\ldots i_r}_{j_1 j_2\ldots j_r} = \delta^{i_1}_{j_1}\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r} + \sum_{k=2}^{r} (-1)^{k+1}\,\delta^{i_1}_{j_k}\,\delta^{i_2\ldots i_r}_{j_1\ldots\hat{j_k}\ldots j_r}. \]
Now we set $j_1=i_1$ and sum over $i_1$ from $1$ to $n$:

(A2)   \[ \sum_{i_1=1}^{n} \delta^{i_1 i_2\ldots i_r}_{i_1 j_2\ldots j_r} = I + II, \]

where $I$ comes from the first term of (A1) and $II$ from the rest.
We use $\sum_{i_1=1}^{n}\delta^{i_1}_{i_1}=n$, so

\[ I = n\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}. \]
In the remaining sum we sum over $i_1$ with $\delta^{i_1}_{j_k}$, which amounts to replacing $i_1$ by $j_k$:

\[ II = \sum_{k=2}^{r} (-1)^{k-1}\,\delta^{i_2\ldots i_r}_{j_k\, j_2\ldots\hat{j_k}\ldots j_r}. \]
The sum $II$ has $r-1$ terms, $k=2,\ldots,r$.

Consider first $k=2$:

\[ (-1)^{1}\,\delta^{i_2\ldots i_r}_{j_2\, j_3\ldots j_r} = -\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}. \]

Consider now $k=3$:

\[ (-1)^{2}\,\delta^{i_2\ldots i_r}_{j_3\, j_2\, j_4\ldots j_r} = -\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}. \]

ALL of the $r-1$ terms will be identical. Transpositions of $j_k$ from the first position to its regular position, where it was omitted, provide the needed signs. Therefore $II = -(r-1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}$, and $I+II = (n-r+1)\,\delta^{i_2\ldots i_r}_{j_2\ldots j_r}$, as claimed.
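The claim that all $r-1$ terms of $II$ are identical can itself be machine-checked. The sketch below (my own, $n=4$, $r=3$) evaluates each term of $II$ after the contraction and confirms it equals $-\delta^{i_2\ldots i_r}_{j_2\ldots j_r}$:

```python
from itertools import permutations, product

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def gdelta(up, lo):
    """Generalized Kronecker delta via formula (3)."""
    r = len(up)
    return sum(sgn(p) * all(up[k] == lo[p[k]] for k in range(r))
               for p in permutations(range(r)))

n, r = 4, 3
for up in product(range(1, n + 1), repeat=r - 1):      # (i2, ..., ir)
    for lo in product(range(1, n + 1), repeat=r - 1):  # (j2, ..., jr)
        base = gdelta(up, lo)
        for k in range(2, r + 1):                      # k-th term of II
            jk = lo[k - 2]                             # the index j_k
            rest = lo[:k - 2] + lo[k - 1:]             # j_2 ... ĵ_k ... j_r
            term = (-1) ** (k - 1) * gdelta(up, (jk,) + rest)
            assert term == -base                       # every term is the same
```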
Ark, it seems to me that you’re being too hard on yourself. The post “It’s All About Permutations” was one of the most helpful for me: it reminded me how to deal with those scary objects with multiple indices. As far as I remember, Penrose even introduced the notion of “arms and legs” in an attempt to make tensors easier to work with. But your explanations are more instructive; after working through the exercises, I truly felt more confident.
The very idea of permutations is very profound, since it is not about quantities but rather about ordering, which is another face of a number.
And finally, the mysterious holographic images of the differently rearranged cats appearing in that post can in no way be called a failure.
Moving to the present post, please take a look; there might be a typo here:
“But the matrix A^{i’}_i is the inverse of A^i_{j’}” –>
“But the matrix A^{i’}_i is the inverse of A^i_{i’}”,
with reference to the “Tensors on a picnic” post.
Thanks!
Yeah the things Ark is hard on himself for are things I wouldn’t be able to notice anyways while the posts themselves are quite interesting. The idea that say CL(4) has 16 dimensions while the permutation of 4 things is 24 yet both have odd/even grade/parity is a little perplexing for me but apparently explaining the details for the Clifford algebra case does get into permutations and the grading is kind of just an end result that works. That Ark looks greatly at the details is great for me since things like number theory parallels are what draws me in; it certainly wasn’t my relatively basic math education.
I used to think I’d never understand even a little about things like EEQT and differential geometry beyond something like EEQT being GRW-like. Now EEQT being GRW-like seems like a very minor detail and I even see Ark’s EEQT and conformal structures work as related. I used to think of them as totally unrelated. It’s fun talking to AI about Ark’s overall work. It’s totally safe to bring up Ark to AI; Ark’s wife Laura and Ark’s friend Tony not so much. I now introduce Tony by introducing Ark first.
I’m probably interested in contravariant vs covariant indices related things for Born reciprocity position momentum phase space reasons but this may be more for future me at best. For Bradonjic unimodular volume form reasons, I like restricting to SL(n,R) though for Born reciprocity reasons I might like restricting even more to SP(n,R) and I’m not sure the pseudo tensors being treated as tensors still holds in this case. Course I’m not even sure what goes wrong parity-wise if you have pseudo tensors instead of tensors.
Thanks. SP is a subgroup of SL, so anything that transforms as a tensor with respect to SL, transforms also as a tensor with respect to SP.
Ark, there are a few places where bold and italic fonts are not working correctly, e.g.:
“The next question is: {\bf is it a tensor?}” and several cases below.
Now I am trying to grasp formulas (1) and (2) and how we should use them to get (3). Isn’t it easier just to take (3) as a definition?
As regards Why?s:
“to get a non-identically-zero symbol, we must have r ≤ n. Why?” If r > n, then two or more of the indices would have to coincide, and we would get zero.
“Every index i_k must appear (and only once) among the indices j_1, …, j_r. Why?”
If no i_k appears among the indices “on the other floor”, then, in view of (3), all (1,1)-deltas on the rhs would have different upper and lower indices, and hence none of them could be nonzero. And if some i_k appears among the j_l twice or more times, then we run short of matching lower indices: the j_l equal to i_k was already used once to give delta^{i_k}_{j_k}, and hence all the other deltas have different upper and lower indices.
“If we apply the same permutation to upper and lower indices, the value of delta remains unchanged. Why?”
That is because each permutation (of upper or of lower indices) only changes the result by the sign of the permutation. If we make the same permutation twice, or any even number of times, the signs cancel and we get the same result.
“Ark, there are a few places where bold and italic fonts are not working correctly,”
Fixed. Thank you!
“Isn’t it easier just to take (3) as a definition?”
Sure. That would do as well. I was hesitating while writing this post.
And your answers to “why’s” are perfect. Thanks.
“We can use Eq. (4) to get ({\bf How?})”
It is not evident to me how to use (4); I would rather use 16.5-5 from Korn’s book,
which states that r-rank deltas are related to s-rank deltas with the factor (n-s)!/(n-r)!.
In our case, s = r-1, we have the coefficient (n-r+1)!/(n-r)! = n-r+1, and that is exactly what is wanted.
As far as tensor densities are concerned, it is a new subject to me. So, the understanding is only intuitive but not rigorous at the moment.
“I would rather use 16.5-5 from the Korn’s book”
One should not rely on formulas from books. Some of them may be false. That is what I have learned. You can use (4) by splitting the sum into two parts – one with delta^{i_1}_{j_1} and the rest. Contracting i_1 with j_1 will give you n. Contracting the rest will give you -(r-1). All these terms will be identical, so you will get n-(r-1) in front of the remaining delta.
Thank you for such a skillful explanation! I should think a little bit more about it.
In fact, I have some hesitation about “All these terms will be identical”.
Here we are actually calculating the determinant of an (r × r) matrix and relating it to that of an (r-1) × (r-1) matrix, right? In doing so one usually multiplies the elements of the upper row by the corresponding minors, and all these minors are different! Is it simplified in our case because all minors of a matrix made of deltas are identical?
Thanks. I added the reasoning in Afternotes at the end of the post. Let me know if it is clear now?
The explanation is crystal clear, thank you very much. I was close to a solution, but I lacked persistence and confidence to work with deltas so freely.
Ark, I looked through the reasoning carefully, it is all clear, there is only one minor remark: when you write the sum for k=1, it is a bit misleading that k still stands there in the expression explicitly, in i_k and j_k. Wouldn’t it be better to omit these indices from the first sum?
You are perfectly right. Fixed. Thank you!
Ark, I continue searching into the relations between pi and CFT. A very useful book is Gelfand, Graev, Vilenkin: https://ikfia.ysn.ru/wp-content/uploads/2018/01/GelfandGraevVilenkin1962ru.pdf. It contains notions which have much in common with those considered in your recent posts.
I make some notes but fail to boil them down into a smooth text… Still working on it.
Thank you very much, Anna! Good that you have reminded me of this book. But can you tell me some more detail about your idea? Which part of this book attracts your particular interest at the moment?
At first I was interested only in the 3rd chapter, about groups of complex matrices as groups of motions of Lobachevsky space and related topics. And I definitely skipped the first two chapters about the Radon transform; indeed, what is the Radon transform, who knows anything about it? 🙂 But then, after some dizzying turns, I suddenly arrived at the X-ray transform! And this one is closely related to the Radon transform. I would like to tell you more, but my notes are very chaotic now; I should put them in some order before showing you.