From SU(1,1) to the Lorentz group

From the two-dimensional disk we are moving to three-dimensional space-time. We will meet Einstein-Poincare-Minkowski special relativity, though in a baby version: with x and y, but without z in space. That is not too bad, because the famous Lorentz transformations, with their length contraction and time dilation, happen already in two-dimensional space-time, with x and t alone. We will discover Lorentz transformations today, first in disguise; then we will unmask them.

First we recall, from The disk and the hyperbolic model, the relation between the coordinates (x,y) on the Poincare disk x^2+y^2<1, and (X,Y,T) on the unit hyperboloid T^2-X^2-Y^2=1.

Space-time hyperboloid and the Poincare disk models

(1)   \begin{eqnarray*} X&=&\frac{2x}{1-x^2-y^2},\\ Y&=&\frac{2y}{1-x^2-y^2},\\ T&=&\frac{1+x^2+y^2}{1-x^2-y^2}. \end{eqnarray*}

(2)   \begin{eqnarray*} x&=&\frac{X}{1+T},\\ y&=&\frac{Y}{1+T}. \end{eqnarray*}
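Both facts, that (X,Y,T) of Eq. (1) lands on the hyperboloid and that Eq. (2) inverts Eq. (1), are easy to spot-check numerically; here is a plain-Python sketch at an arbitrary sample point of the disk:

```python
import math

# a quick numerical spot-check of Eqs. (1) and (2): map a sample point of the
# disk to the hyperboloid and back
x, y = 0.3, -0.4                          # sample point, x^2 + y^2 < 1
d = 1 - x*x - y*y
X, Y, T = 2*x/d, 2*y/d, (1 + x*x + y*y)/d     # Eq. (1)

assert abs(T*T - X*X - Y*Y - 1) < 1e-12       # lies on T^2 - X^2 - Y^2 = 1
assert abs(X/(1 + T) - x) < 1e-12             # Eq. (2) recovers x
assert abs(Y/(1 + T) - y) < 1e-12             # Eq. (2) recovers y
```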

We have the group SU(1,1) acting on the disk by fractional linear transformations. With z=x+iy and A in SU(1,1),

(3)   \begin{equation*}A=\begin{bmatrix}\lambda&\mu\\ \nu&\rho\end{bmatrix},\end{equation*}

the fractional linear action is

(4)   \begin{equation*}A:z\mapsto z_1=\frac{\rho z+\nu}{\mu z+\lambda}.\end{equation*}

By the way, we know from previous notes that A is in SU(1,1) if and only if

(5)   \begin{equation*}\nu=\bar{\mu},\,\rho=\bar{\lambda},\quad|\lambda|^2-|\mu|^2=1.\end{equation*}

Having the new point on the disk, with coordinates (x_1,y_1), we can use Eq. (1) to calculate the new space-time point coordinates (X_1,Y_1,T_1). This is what we will do now. We will see that even though z_1 depends on z in a nonlinear way, the space-time coordinates transform linearly. We will calculate the transformation matrix L(A) and express it in terms of \lambda and \mu. We will also check that this is a matrix in the group SO(1,2).

The program above involves algebraic calculations. Doing them by hand is not a good idea. Let me recall a quote from Gottfried Leibniz, who, according to Wikipedia

He became one of the most prolific inventors in the field of mechanical calculators. While working on adding automatic multiplication and division to Pascal’s calculator, he was the first to describe a pinwheel calculator in 1685[13] and invented the Leibniz wheel, used in the arithmometer, the first mass-produced mechanical calculator. He also refined the binary number system, which is the foundation of virtually all digital computers.

“It is unworthy of excellent men to lose hours like slaves in the labor of calculation which could be relegated to anyone else if machines were used.”
— Gottfried Leibniz

I used Mathematica as my machine. The same calculations can certainly be done with Maple, or with free software like REDUCE or Maxima. For those interested, the code that I used, and the results, can be reviewed as a separate HTML document: From SU(1,1) to Lorentz.

Here I will provide only the results. It is important to notice that while the matrix A has complex entries, the matrix L(A) is real. The entries of L(A) depend on the real and imaginary parts of \lambda and \mu:

(6)   \begin{equation*}\lambda=\lambda_r+i\lambda_i,\, \mu=\mu_r+i\mu_i.\end{equation*}

Here is the calculated result for L(A):

(7)   \begin{equation*}L(A)=\begin{bmatrix}   -\lambda_i^2+\lambda_r^2-\mu_i^2+\mu_r^2 & 2 \lambda_i \lambda_r-2 \mu_i \mu_r & 2 \lambda_r \mu_r-2 \lambda_i \mu_i \\  -2 \lambda_i \lambda_r-2 \mu_i \mu_r & -\lambda_i^2+\lambda_r^2+\mu_i^2-\mu_r^2 & -2 \lambda_r \mu_i-2 \lambda_i \mu_r \\  2 \lambda_i \mu_i+2 \lambda_r \mu_r & 2 \lambda_i \mu_r-2 \lambda_r \mu_i & \lambda_i^2+\lambda_r^2+\mu_i^2+\mu_r^2  \end{bmatrix}\end{equation*}
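One can spot-check numerically that the nonlinear action (4) on the disk becomes the linear action of the matrix (7) on the hyperboloid. A plain-Python sketch, with an arbitrary sample A in SU(1,1) and a sample point z:

```python
import cmath, math

# sample A in SU(1,1): mu is arbitrary, lam is chosen so |lam|^2 - |mu|^2 = 1
mu = 0.3 + 0.4j
lam = cmath.exp(0.7j) * math.sqrt(1 + abs(mu)**2)

def disk_to_hyperboloid(z):
    # Eq. (1)
    d = 1 - abs(z)**2
    return (2*z.real/d, 2*z.imag/d, (1 + abs(z)**2)/d)

def L_of(lam, mu):
    # the matrix L(A) of Eq. (7), acting on column vectors (X, Y, T)
    lr, li, mr, mi = lam.real, lam.imag, mu.real, mu.imag
    return [[-li*li + lr*lr - mi*mi + mr*mr, 2*li*lr - 2*mi*mr, 2*lr*mr - 2*li*mi],
            [-2*li*lr - 2*mi*mr, -li*li + lr*lr + mi*mi - mr*mr, -2*lr*mi - 2*li*mr],
            [2*li*mi + 2*lr*mr, 2*li*mr - 2*lr*mi, li*li + lr*lr + mi*mi + mr*mr]]

z = 0.2 - 0.5j                                            # a point on the disk
z1 = (lam.conjugate()*z + mu.conjugate())/(mu*z + lam)    # Eq. (4)

v0 = disk_to_hyperboloid(z)
v1 = disk_to_hyperboloid(z1)
Lv = [sum(L_of(lam, mu)[i][j]*v0[j] for j in range(3)) for i in range(3)]
assert all(abs(p - q) < 1e-12 for p, q in zip(Lv, v1))
```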

In From SU(1,1) to Lorentz it is first verified that the matrix L(A) has determinant 1. Then it is verified that it preserves the Minkowski space-time metric. With G defined as

(8)   \begin{equation*}G=\begin{bmatrix}-1&0&0\\0&-1&0\\0&0&1\end{bmatrix}\end{equation*}

we have

(9)   \begin{equation*}L(A)GL(A)^T=G.\end{equation*}

Since L(A)_{3,3}= \lambda_i^2+\lambda_r^2+\mu_i^2+\mu_r^2 =1+2|\mu|^2\geq 1>0, the transformation L(A) preserves the time direction. Thus L(A) is an element of the proper Lorentz group \mathrm{SO}^{+}(1,2).
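These three properties, determinant 1, preservation of G, and preservation of the time direction, can be spot-checked in plain Python at an arbitrary sample element of SU(1,1):

```python
import cmath, math

mu = 0.5 - 0.2j
lam = cmath.exp(0.3j) * math.sqrt(1 + abs(mu)**2)   # so |lam|^2 - |mu|^2 = 1
lr, li, mr, mi = lam.real, lam.imag, mu.real, mu.imag

# L(A) of Eq. (7) and the metric G of Eq. (8)
L = [[-li*li + lr*lr - mi*mi + mr*mr, 2*li*lr - 2*mi*mr, 2*lr*mr - 2*li*mi],
     [-2*li*lr - 2*mi*mr, -li*li + lr*lr + mi*mi - mr*mr, -2*lr*mi - 2*li*mr],
     [2*li*mi + 2*lr*mr, 2*li*mr - 2*lr*mi, li*li + lr*lr + mi*mi + mr*mr]]
G = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]

det = (L[0][0]*(L[1][1]*L[2][2] - L[1][2]*L[2][1])
     - L[0][1]*(L[1][0]*L[2][2] - L[1][2]*L[2][0])
     + L[0][2]*(L[1][0]*L[2][1] - L[1][1]*L[2][0]))
assert abs(det - 1) < 1e-12                          # det L(A) = 1

# Eq. (9): L G L^T = G (G is diagonal, so the sum collapses)
LGLt = [[sum(L[i][a]*G[a][a]*L[j][a] for a in range(3)) for j in range(3)]
        for i in range(3)]
assert all(abs(LGLt[i][j] - G[i][j]) < 1e-12 for i in range(3) for j in range(3))
assert L[2][2] >= 1                                  # preserves the time direction
```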

Remark: Of course we could have chosen G with (+1,+1,-1) on the diagonal. We would have the group SO(2,1), and we would write the hyperboloid as X^2+Y^2-T^2=-1. It is a question of convention.

In SU(1,1) straight lines on the disk we considered three one-parameter subgroups of SU(1,1):

(10)   \begin{eqnarray*} X_1&=&\begin{bmatrix}0&i\\-i&0\end{bmatrix},\\ X_2&=&\begin{bmatrix}0&1\\1&0\end{bmatrix},\\ X_3&=&\begin{bmatrix}i&0\\0&-i\end{bmatrix}. \end{eqnarray*}

(11)   \begin{eqnarray*} A_1(t)&=& \exp(tX_1)=\begin{bmatrix}\cosh(t)&i\sinh(t)\\-i\sinh(t)&\cosh(t)\end{bmatrix},\\ A_2(t)&=& \exp(tX_2)=\begin{bmatrix}\cosh(t)&\sinh(t)\\ \sinh(t)&\cosh(t)\end{bmatrix},\\ A_3(t)&=& \exp(tX_3)=\begin{bmatrix}e^{it}&0\\ 0&e^{-it}\end{bmatrix}. \end{eqnarray*}
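The exponentials (11) can be checked even without a computer algebra system: a truncated power series is plenty for these 2×2 matrices. A plain-Python sketch at a sample value of t:

```python
import cmath, math

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_exp(M, terms=40):
    # exp(M) by its power series; 40 terms is ample for these small matrices
    result = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = mat_mul([[term[i][j]/n for j in range(2)] for i in range(2)], M)
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

t = 0.7
ch, sh = math.cosh(t), math.sinh(t)
pairs = [
    ([[0, 1j], [-1j, 0]], [[ch, 1j*sh], [-1j*sh, ch]]),                    # X1, A1(t)
    ([[0, 1], [1, 0]],    [[ch, sh], [sh, ch]]),                           # X2, A2(t)
    ([[1j, 0], [0, -1j]], [[cmath.exp(1j*t), 0], [0, cmath.exp(-1j*t)]]),  # X3, A3(t)
]
for X, A in pairs:
    E = mat_exp([[t*X[i][j] for j in range(2)] for i in range(2)])
    assert all(abs(E[i][j] - A[i][j]) < 1e-10 for i in range(2) for j in range(2))
```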

We can now use Eq. (7) in order to see which space-time transformations they implement. Again I calculated it obeying Leibniz and using a machine (see From SU(1,1) to Lorentz).

A replica of the Stepped Reckoner of Leibniz from 1923 (original is in the Hannover Landesbibliothek)

Here are the results of the machine work:

(12)   \begin{equation*}L_1(t)=L(A_1(t))=\begin{bmatrix}  1 & 0 & 0 \\  0 & \cosh (2 t) & -\sinh (2 t) \\  0 & -\sinh (2 t) & \cosh (2 t)\end{bmatrix},\end{equation*}

(13)   \begin{equation*}L_2(t)=L(A_2(t))=\begin{bmatrix}  \cosh (2 t) & 0 & \sinh (2 t) \\  0 & 1 & 0 \\  \sinh (2 t) & 0 & \cosh (2 t) \end{bmatrix},\end{equation*}

(14)   \begin{equation*}L_3(\phi)=L(A_3(\phi))=\begin{bmatrix}  \cos (2 \phi ) & \sin (2 \phi ) & 0 \\  -\sin (2 \phi ) & \cos (2 \phi ) & 0 \\  0 & 0 & 1 \end{bmatrix}.\end{equation*}

The third family is a simple Euclidean rotation in the (X,Y) plane. That is why I denoted the parameter with the letter \phi. In order to “decode” the first two one-parameter subgroups it is convenient to introduce a new variable v and set 2t=\mathrm{arctanh}(v). The group property L(t_1)L(t_2)=L(t_1+t_2) is then lost, but the matrices evidently become those of special Lorentz transformations: L_1(v) transforming Y and T, leaving X unchanged, and L_2(v) transforming (X,T) and leaving Y unchanged (though with a different sign of v). Taking into account the identities

(15)   \begin{eqnarray*} \cosh(\mathrm{arctanh} (v))&=&\frac{1}{\sqrt{1-v^2}},\\ \sinh(\mathrm{arctanh} (v))&=&\frac{v}{\sqrt{1-v^2}} \end{eqnarray*}

we get

(16)   \begin{equation*}L_1(v)=\begin{bmatrix}  1 & 0 & 0 \\  0 & \frac{1}{\sqrt{1-v^2}} & -\frac{v}{\sqrt{1-v^2}} \\  0 & -\frac{v}{\sqrt{1-v^2}} & \frac{1}{\sqrt{1-v^2}}\end{bmatrix},\end{equation*}

(17)   \begin{equation*}L_2(v)=\begin{bmatrix} \frac{1}{\sqrt{1-v^2}} & 0 & \frac{v}{\sqrt{1-v^2}} \\  0 & 1 & 0 \\ \frac{v}{\sqrt{1-v^2}} & 0 & \frac{1}{\sqrt{1-v^2}} \end{bmatrix}.\end{equation*}
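A quick numeric check of the identities (15) and of the reparametrized boost (16), at the arbitrary sample value v = 0.6:

```python
import math

v = 0.6                      # sample value, |v| < 1
u = math.atanh(v)            # u = 2t, so that 2t = arctanh(v)

gamma = 1/math.sqrt(1 - v*v)
# identities (15): cosh(arctanh v) = gamma, sinh(arctanh v) = gamma*v
assert abs(math.cosh(u) - gamma) < 1e-12
assert abs(math.sinh(u) - gamma*v) < 1e-12

# hence L_1(t) of Eq. (12) becomes the boost matrix (16)
L1 = [[1, 0, 0],
      [0, math.cosh(u), -math.sinh(u)],
      [0, -math.sinh(u), math.cosh(u)]]
L1v = [[1, 0, 0],
       [0, gamma, -gamma*v],
       [0, -gamma*v, gamma]]
assert all(abs(L1[i][j] - L1v[i][j]) < 1e-12 for i in range(3) for j in range(3))
```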

In the following posts we will use the relativistic Minkowski space distance on the hyperboloid for finding the distance formula on the Poincare disk.

Putting a spin on mistakes

I am reading Leopold Infeld, “QUEST: The Evolution of a Scientist“, Readers Union Limited, London 1942. There is a little piece there about errors that drew my attention. The story takes place in Princeton:

The next day I met Robertson in Fine Hall and told him:
“I am convinced now that gravitational waves do not exist. I believe I am able to show it in a very brief way.” Robertson was still sceptical:
“I don’t believe you,” and he suggested a more detailed discussion. He took the two pages on which I had written my proof and read it through.
“The idea is O.K. There must be some trivial mistake in your calculations.”
He began quickly and efficiently to check all the steps of my argument, even the most simple ones, comparing the results on the blackboard with those in my notes. The beginning checked beautifully. I marvelled at the quickness and sureness with which Robertson performed all the computations. Then, near the end, there was a small discrepancy. He got plus where I got minus. We checked and rechecked the point; Robertson was right. At one place I had made a most trivial mistake, in spite of having repeated the calculations three times! Such a mistake must have as simple an explanation as a Freudian slip in writing or language. Subconsciously I did not believe in the result and my first doubts were still there. But I wanted to prove it in a simpler way than had Einstein. Thus I had to cheat myself. Such mistakes happen often, but they are usually caught before papers are printed. The author comes back many times to his work before he sees it in print and has, therefore, plenty of opportunity to gain a more detached attitude and to apply the criteria of logic which were repressed during the emotional strain of creation.
Thus my whole result went to pieces. Although it was a very small matter, I felt very downcast at the moment, as does every scientist in similar circumstances even though he has accustomed himself to disappointment. He feels the collapse of hope, the death of the creation of his brain. It is like the succession of birth and death.
Robertson tried to console me:
“It happens to everyone. The most trivial mistakes are always most difficult to detect.”

Interesting! But who is this Robertson?

From Wikipedia:

Howard Percy “Bob” Robertson (January 27, 1903 – August 26, 1961) was an American mathematician and physicist known for contributions related to physical cosmology and the uncertainty principle. He was Professor of Mathematical Physics at the California Institute of Technology and Princeton University.

After the war Robertson was director of the Weapons System Evaluation Group in the Office of the Secretary of Defense from 1950 to 1952, chairman of the Robertson Panel on UFO’s in 1953 and Scientific Advisor to the NATO Supreme Allied Commander Europe (SACEUR) in 1954 and 1955. He was Chairman of the Defense Science Board from 1956 to 1961, and a member of the President’s Science Advisory Committee (PSAC) from 1957 to 1961.

… The Robertson Panel recommended that a public education campaign should be undertaken in order to reduce public interest in the subject, minimising the risk of swamping Air Defence systems with reports at critical times, and that civilian UFO groups should be monitored.
A number of criticisms have been leveled at the Robertson panel. In particular that the panel’s study of the phenomena was relatively perfunctory and its conclusions largely predetermined by the earlier CIA review of the UFO situation.

Robertson Panel was a device used by the CIA to establish a cover program (Blue Book) that would draw attention away from a covert program designed to meet the UFO challenge.

Richard M. Dolan, Jacques F. Vallee,
UFOs and the National Security State: Chronology of a Coverup, 1941-1973, Hampton Roads 2002

So, it seems that H. P. Robertson knew not only how to avoid mistakes, but he also knew how to mislead others.

As for me, I am very good in the business of making mistakes. My ideas are correct most of the time, but when it comes to implementing them, I make a lot of mistakes, at every step. That is probably because I am impatient. When I want to get some result, I go as quickly as I can to the end, just to see that the method works, even if the end result is wrong. And the method works if it produces a result, because there are many methods that lead to no result at all. Once I have a method that produces results, I start working on removing the mistakes. Usually I check the end result using several different methods; it is rather improbable that the same mistake will show up when you use different methods.
But then, when I have everything working, I need to write it all down, in detail. And that is a pain. That is where I make even more mistakes.
Why is it so? It must be something more than lack of patience. It is not a Freudian slip, I am sure, though sometimes it may be the case….

Anyway it is time now to go back to calculations, where making mistakes is so easy.

Let me recall from the previous post Pauli, rotations and quaternions.

We consider the group \mathrm{SU}(2) of complex 2\times 2 unitary matrices of determinant 1. We have Hermitian traceless matrices s_1,s_2,s_3 given by Eq. (1) in Pauli, rotations and quaternions. To each vector \vec{v}=(v_1,v_2,v_3) we associate Hermitian matrix
\vec{v}\cdot\vec{s}, as in Eq. (10) in the previous post. We have the orthogonal transformation R defined by

    \[ U\,(\vec{v}\cdot\vec{s})\,U^{*}=(R\vec{v})\cdot\vec{s}.\]
We know that R is orthogonal. We would like to know that \det R=1, and that is where we left off in the last note.
One way of proving that R has determinant 1 is to express R in terms of the entries of the matrix U. We know (see the previous post) that U has the form

(1)   \begin{equation*}\begin{bmatrix}a&-\bar{c}\\c&\bar{a}\end{bmatrix},\end{equation*}

where |a|^2+|c|^2=1. It is now a straightforward calculation to express R in terms of a and c. But the calculation is long, and it is easy to make a mistake. That is why I used REDUCE, an excellent free computer algebra system (and then double-checked with Mathematica). The REDUCE file doing the calculation is here. And here is the end result:

(2)   \begin{align*} R_{11}&=\frac{a^2+\bar{a}^2-c^2-\bar{c}^2}{2},\\ R_{12}&=i\,\frac{a^2-\bar{a}^2-c^2+\bar{c}^2}{2},\\ R_{13}&=a\bar{c}+\bar{a}c,\\ R_{21}&=i\,\frac{\bar{a}^2-a^2+\bar{c}^2-c^2}{2},\\ R_{22}&= \frac{a^2+\bar{a}^2+c^2+\bar{c}^2}{2},\\ R_{23}&=i\,(\bar{a}c-a\bar{c}),\\ R_{31}&=-ac-\bar{a}\bar{c},\\ R_{32}&=i\,(\bar{a}\bar{c}-ac),\\ R_{33}&=a\bar{a}-c\bar{c}. \end{align*}
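A quick plain-Python spot-check that the matrix (2) is indeed real orthogonal with determinant 1 when |a|^2+|c|^2=1 (the sample values are arbitrary):

```python
import cmath, math

c = 0.3 + 0.4j
a = cmath.exp(0.9j) * math.sqrt(1 - abs(c)**2)     # so |a|^2 + |c|^2 = 1
ab, cb = a.conjugate(), c.conjugate()

# the matrix R of Eq. (2); .real only strips the vanishing imaginary parts
R = [[((a*a + ab*ab - c*c - cb*cb)/2).real,
      (1j*(a*a - ab*ab - c*c + cb*cb)/2).real,
      (a*cb + ab*c).real],
     [(1j*(ab*ab - a*a + cb*cb - c*c)/2).real,
      ((a*a + ab*ab + c*c + cb*cb)/2).real,
      (1j*(ab*c - a*cb)).real],
     [(-a*c - ab*cb).real, (1j*(ab*cb - a*c)).real, (a*ab - c*cb).real]]

RRt = [[sum(R[i][k]*R[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
assert all(abs(RRt[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(3) for j in range(3))     # R R^T = 1

det = (R[0][0]*(R[1][1]*R[2][2] - R[1][2]*R[2][1])
     - R[0][1]*(R[1][0]*R[2][2] - R[1][2]*R[2][0])
     + R[0][2]*(R[1][0]*R[2][1] - R[1][1]*R[2][0]))
assert abs(det - 1) < 1e-12                         # det R = 1
```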

Or in terms of quaternionic components W,X,Y,Z as in Eq. (9) of the previous post:

(3)   \begin{equation*} \begin{bmatrix} W^2+X^2-Y^2-Z^2&2(XY-WZ)&2(WY+XZ)\\ 2(WZ+XY)&W^2-X^2+Y^2-Z^2&2(YZ-WX)\\ 2(XZ-WY)&2(WX+YZ)&W^2-X^2-Y^2+Z^2 \end{bmatrix}. \end{equation*}
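Equations (2) and (3) should be two forms of the same matrix. Comparing them suggests the identification a = W + iZ, c = Y + iX; since I am not reproducing Eq. (9) of the previous post here, treat that correspondence as my assumption. With it, the two expressions agree identically, as a plain-Python spot-check shows:

```python
# assumed correspondence between (a, c) and the quaternionic components:
# a = W + iZ, c = Y + iX (this is what makes Eqs. (2) and (3) agree)
W, X, Y, Z = 0.6, 0.2, -0.3, 0.7      # arbitrary sample values
a = complex(W, Z)
c = complex(Y, X)
ab, cb = a.conjugate(), c.conjugate()

# Eq. (2)
R2 = [[((a*a + ab*ab - c*c - cb*cb)/2).real,
       (1j*(a*a - ab*ab - c*c + cb*cb)/2).real,
       (a*cb + ab*c).real],
      [(1j*(ab*ab - a*a + cb*cb - c*c)/2).real,
       ((a*a + ab*ab + c*c + cb*cb)/2).real,
       (1j*(ab*c - a*cb)).real],
      [(-a*c - ab*cb).real, (1j*(ab*cb - a*c)).real, (a*ab - c*cb).real]]

# Eq. (3)
R3 = [[W*W + X*X - Y*Y - Z*Z, 2*(X*Y - W*Z), 2*(W*Y + X*Z)],
      [2*(W*Z + X*Y), W*W - X*X + Y*Y - Z*Z, 2*(Y*Z - W*X)],
      [2*(X*Z - W*Y), 2*(W*X + Y*Z), W*W - X*X - Y*Y + Z*Z]]

assert all(abs(R2[i][j] - R3[i][j]) < 1e-12 for i in range(3) for j in range(3))
```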

The REDUCE file producing the above matrix is here.
Then I used REDUCE again to verify that R defined as above is indeed orthogonal and has determinant one. The REDUCE file is here.

It takes a while to write the code and to debug it, but then producing the result is almost instant.

There is one more property that we need to verify. We have the map U\mapsto R=R(U), which maps \mathrm{SU}(2) matrices into orthogonal matrices of determinant 1. But we would like to be sure that every rotation R can be obtained in this way. How to do it? One way would be to calculate a and c from the matrix R using the formulas above. But that would not be easy: we have four real unknowns and nine quadratic equations. There is a smarter and more useful method. Here is how we do it: suppose we know that every rotation R, that is, every orthogonal matrix of determinant 1, is a rotation about some axis by some angle (and it is true). That is, we can represent R as

    \[ R=\exp(\theta W(\vec{k})),\]

where \vec{k} is a unit vector. That was discussed in the note Spin – we know that we do not know.
From the Rodrigues rotation formula  discussed in this note we can calculate R explicitly.
On the other hand we can calculate explicitly U=\exp(i\frac{\theta}{2}\vec{k}\cdot \vec{s}) and then R(U). It is then a matter of straightforward calculation to verify that we get our R:

(4)   \begin{equation*}\exp(\theta W(\vec{k}))=R\left(\exp(i\frac{\theta}{2}\vec{k}\cdot \vec{s})\right).\end{equation*}
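Assuming W(\vec{k}) is the usual cross-product matrix, so that \exp(\theta W(\vec{k})) is given by the Rodrigues formula 1 + \sin\theta\, W + (1-\cos\theta)\, W^2, Eq. (4) can be spot-checked numerically via the quaternionic form (3), with W = \cos(\theta/2) and (X,Y,Z)=\sin(\theta/2)\vec{k}:

```python
import math

theta = 1.1
k = (0.36, 0.48, 0.8)               # unit axis: 0.36^2 + 0.48^2 + 0.8^2 = 1

# unit quaternion of the rotation: (cos(theta/2), sin(theta/2) k)
qw = math.cos(theta/2)
qx, qy, qz = (math.sin(theta/2)*ki for ki in k)

# quaternionic rotation matrix, Eq. (3)
Rq = [[qw*qw + qx*qx - qy*qy - qz*qz, 2*(qx*qy - qw*qz), 2*(qw*qy + qx*qz)],
      [2*(qw*qz + qx*qy), qw*qw - qx*qx + qy*qy - qz*qz, 2*(qy*qz - qw*qx)],
      [2*(qx*qz - qw*qy), 2*(qw*qx + qy*qz), qw*qw - qx*qx - qy*qy + qz*qz]]

# Rodrigues: exp(theta K) = 1 + sin(theta) K + (1 - cos(theta)) K^2
K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]
K2 = [[sum(K[i][m]*K[m][j] for m in range(3)) for j in range(3)] for i in range(3)]
Rod = [[(1.0 if i == j else 0.0) + math.sin(theta)*K[i][j]
        + (1 - math.cos(theta))*K2[i][j] for j in range(3)] for i in range(3)]

assert all(abs(Rq[i][j] - Rod[i][j]) < 1e-12 for i in range(3) for j in range(3))
```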

The calculation is simple but long. REDUCE code that does it is here.

Notes on Ehresmann’s connections – Part 2: the Frölicher–Nijenhuis bracket

In this note I shall introduce the Frölicher-Nijenhuis bracket – a powerful tool, especially when dealing with connections on fibre bundles.
Let  (E,\pi,M) be a fibre bundle. In the previous note we have seen that the connection form  \omega is a one-form on   E with values in  TE (in fact, with values in  VE\subset TE). This leads us to the more general subject of natural operations on tangent-valued forms.

Let, for now,  E be any differentiable manifold, and let us consider the space of tangent-valued forms on  E. Let  \Omega (E)=\oplus_{i=0}^n \Omega^{i}(E) be the graded algebra of ordinary differential forms on  E. Then the (graded) space  \Omega (E,TE) of tangent-valued forms can be written as

 \Omega(E,TE)=\Omega(E)\otimes_E T(E).

In the following we will use the capital indices  A,B,C\ldots to refer to local charts  x^A of  E.

The elements of  \Omega^{0}(E,TE) are simply vector fields  \xi^A on  E. The elements of  \Omega^{1}(E,TE) are tangent-valued one-forms. If \Phi is such a form, then \Phi assigns to each tangent vector \xi\in T_pE another tangent vector  \Phi(\xi)\in T_pE. In coordinates it is represented as  \Phi^A_B. In general, an element of  \Omega^{r}(E,TE) is represented by  \Phi^A_{B_1,\ldots B_r}, antisymmetric in indices  B_1,\ldots B_r, or

 \Phi= \frac{1}{r!}\,\Phi^A_{B_1,\ldots B_r}\,dx^{B_1}\wedge\ldots\wedge dx^{B_r}\,\partial_A.

In the following it will be convenient to introduce a condensed multi-index notation. We will write

 \Phi=\sum \Phi^A_B\,d^B\otimes \partial_A,

where  B=(1\leq B_1 < B_2 < \ldots < B_r\leq\mathrm{dim}\,E),  d^B=dx^{B_1}\wedge\ldots\wedge dx^{B_r} and \partial_A=\frac{\partial}{\partial x^A}.

Operations on differential forms

The Frölicher-Nijenhuis (FN) bracket will assign to any two tangent-valued forms \Phi\in \Omega^{r}(E,TE),\,\Psi\in \Omega^{s}(E,TE) another form  [\Phi,\Psi]\in \Omega^{(r+s)}(E,TE).

We will see (in the following posts) that the curvature of a connection can be simply expressed in terms of the FN bracket of the connection form with itself. Also the Bianchi identities for the curvature will follow immediately from this definition and from the properties of the FN bracket. But first let us recall the three fundamental operations on differential forms  \omega\in\Omega(E). These are: exterior derivative, Lie derivative, and insertion. The exterior derivative  d:\omega\mapsto d\omega maps  \Omega^{r}(E) to  \Omega^{(r+1)}(E) according to the formula

 d\omega(X_0,\ldots ,X_r)=\sum_{i=0}^r(-1)^iX_i(\omega(X_0,\ldots,\hat{X_i},\ldots ,X_r))+\sum_{i<j}(-1)^{i+j}\omega([X_i,X_j],X_0,\ldots,\hat{X_i},\ldots,\hat{X_j},\ldots ,X_r).

Given a vector field  X\in\mathfrak{X} (E) the insertion operator  i_X maps  \Omega^{r}(E) to  \Omega^{(r-1)}(E) according to the formula

 (i_X\omega)(X_1,\ldots ,X_{r-1})=\omega(X,X_1,\ldots ,X_{r-1}).
The Lie derivative  \mathfrak{L}_X maps  \Omega^{r}(E) into itself, the definition being

 \mathfrak{L}_X\omega=\frac{d}{dt}\Big|_{t=0}(\mathrm{Fl}^X_t)^*\omega,

where  \mathrm{Fl}^X_t denotes the (local) flow of  X.
These three operations on number-valued differential forms have the following important properties (note: the commutators are graded commutators):

  • d(\omega\wedge\eta)=d\omega\wedge\eta+(-1)^{|\omega|}\omega\wedge d\eta
  •  i_X(\omega\wedge\eta)=i_X\omega\wedge\eta+(-1)^{|\omega|}\omega\wedge i_X\eta
  •  \mathfrak{L}_X(\omega\wedge\eta)=\mathfrak{L}_X\omega\wedge\eta+(-1)^{|\omega|}\omega\wedge \mathfrak{L}_X \eta
  •  d^2=0
  •   [i_X,i_Y]=i_Xi_Y+i_Yi_X=0
  •  [\mathfrak{L}_X,d]=\mathfrak{L}_X\circ d+d\circ\mathfrak{L}_X=0
  •  [i_X,d]=i_X\circ d+d\circ i_X=\mathfrak{L}_X
  •  [\mathfrak{L}_X,\mathfrak{L}_Y]=\mathfrak{L}_X\circ\mathfrak{L}_Y-\mathfrak{L}_Y\circ\mathfrak{L}_X=\,\mathfrak{L}_{[X,Y]}
  •  [\mathfrak{L}_X,i_Y]=\mathfrak{L}_X i_Y-i_Y\mathfrak{L}_X=i_{[X,Y]}
  • If f is a function on E, then \mathfrak{L}_{fX}\omega=f\mathfrak{L}_X\omega+df\wedge i_X\omega

Frölicher–Nijenhuis bracket

The Frölicher-Nijenhuis bracket associates to each pair of tangent-valued forms on  E a third tangent-valued form. By bilinearity it is enough to define it on simple tensors. For \alpha\in\Omega^{r}(E),\,\beta\in\Omega^{s}(E), and for  X,Y\in \mathfrak{X}(E), it is defined by the following formula:

     \begin{eqnarray*} [\alpha\otimes X,\,\beta\otimes Y]&=&(\alpha\wedge\beta)\otimes [X,Y]\\&+&(\alpha\wedge\mathfrak{L}_X\beta)\otimes Y\\&-&(\mathfrak{L}_Y\alpha\wedge\beta)\otimes X\\&+&(-1)^r(d\alpha\wedge i_X\beta)\otimes Y\\&+&(-1)^r(i_Y\alpha\wedge d\beta)\otimes X\end{eqnarray*}

In coordinates the Frölicher-Nijenhuis bracket is given by the following explicit expression:

     \begin{eqnarray*} [\Phi,\Psi]&=&\left(\Phi^C_{B_1\ldots B_r}\partial_C\Psi^A_{B_{r+1}\ldots B_{r+s}}\right.\\ &-&(-1)^{rs}\Psi^C_{B_1\ldots B_s}\partial_C\Phi^A_{B_{s+1}\ldots B_{r+s}}\\ & -&r\Phi^A_{B_1\ldots B_{r-1} C}\partial_{B_r}\Psi^C_{B_{r+1}\ldots B_{r+s}}\\ &+&(-1)^{rs}s\Psi^A_{CB_{1}\ldots B_{s-1}}\partial_{B_{s}}\Phi^C_{B_{s+1}\ldots B_{r+s}}\left.\right)\,d^B\otimes\partial_A\end{eqnarray*}

If we want to write the same formula in a more explicit form, then antisymmetrizations and combinatorial factors enter:

     \begin{align*} &[\Phi,\Psi](X_1,\dots X_{r+s})=\\ &=\frac{1}{r!s!}\sum_\sigma (-1)^\sigma [\Phi(X_{\sigma 1}\ldots X_{\sigma r}),\Psi(X_{\sigma(r+1)}\ldots X_{\sigma(r+s)})] \\ &+(-1)^r\left( \frac{1}{r!(s-1)!}\sum_\sigma (-1)^\sigma\Psi([X_{\sigma1},\Phi(X_{\sigma2},\ldots ,X_{\sigma (r+1)})],X_{\sigma (r+2)},\ldots)\right.\\ &\left.-\frac{1}{(r-1)!(s-1)!2!}\sum_\sigma(-1)^\sigma\Psi(\Phi([X_{\sigma 1},X_{\sigma 2}],X_{\sigma 3},\ldots),X_{\sigma (r+2)},\ldots) \right)\\ &-(-1)^{rs+s}\left(\frac{1}{(r-1)!s!}\sum_\sigma (-1)^\sigma\Phi([X_{\sigma 1},\Psi(X_{\sigma 2},\ldots ,X_{\sigma (s+1)})],X_{\sigma (s+2)},\ldots)\right.\\ &-\left.\frac{1}{(r-1)!(s-1)!2!}\sum_\sigma(-1)^\sigma\Phi(\Psi([X_{\sigma 1},X_{\sigma 2}],X_{\sigma 3},\ldots ),X_{\sigma (s+2)},\ldots )\right) \end{align*}
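For r = s = 0 the coordinate formula should reduce to the ordinary Lie bracket of vector fields. Here is a small plain-Python sanity check of that special case, with arbitrary sample fields, a sample test function, and finite-difference derivatives:

```python
import math

def deriv(g, p, i, h=1e-4):
    # central finite-difference partial derivative of g at p in direction i
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (g(q1) - g(q2)) / (2*h)

def Phi(p):   # a sample vector field on R^2
    x, y = p
    return (x*x, y)

def Psi(p):   # another sample vector field on R^2
    x, y = p
    return (x*y, x + y)

def f(p):     # a sample test function
    x, y = p
    return math.sin(x) * y

def X_of(V, g):   # directional derivative g -> Vg
    return lambda p: sum(V(p)[i] * deriv(g, p, i) for i in range(2))

p = (0.7, -0.4)

# coordinate formula for r = s = 0: [Phi,Psi]^A = Phi^C d_C Psi^A - Psi^C d_C Phi^A
bracket = [sum(Phi(p)[c] * deriv(lambda q, a=a: Psi(q)[a], p, c)
             - Psi(p)[c] * deriv(lambda q, a=a: Phi(q)[a], p, c)
               for c in range(2)) for a in range(2)]
bracket_f = sum(bracket[a] * deriv(f, p, a) for a in range(2))

# commutator of directional derivatives: (Phi Psi - Psi Phi) f
comm_f = X_of(Phi, X_of(Psi, f))(p) - X_of(Psi, X_of(Phi, f))(p)

assert abs(bracket_f - comm_f) < 1e-3     # agree up to finite-difference error
```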

As an exercise let us compute  [id_E,\Psi] for  \Psi\in\Omega^{s}(E,TE), where  id_E=dx^A\otimes\partial_A is the identity tangent-valued one-form (the natural soldering form), with coordinates \delta^A_B. Thus we set  r=1. The first term gives

 \delta^B_{A_1}\partial_B \Psi^A_{A_2\ldots A_{s+1}}=\partial_{A_1}\Psi^A_{A_2\ldots A_{s+1}}.

The second and fourth terms vanish because the  \delta^A_B are constant.

The third term is

 -\delta^A_B\partial_{A_1}\Psi^B_{A_2\ldots A_{s+1}}=-\partial_{A_1}\Psi^A_{A_2\ldots A_{s+1}}

It follows that  [dx^A\otimes\partial_A,\Psi]=0; therefore  id_E commutes with every tangent-valued form.

The most important properties of the FN bracket are the following ones

  •  [\Phi,\Psi]=-(-1)^{|\Phi|\,|\Psi|}\,[\Psi,\Phi]\quad – graded antisymmetry
  •  [\Phi_1,[\Phi_2,\Phi_3]]=[[\Phi_1,\Phi_2],\Phi_3]+(-1)^{|\Phi_1|\,|\Phi_2|}[\Phi_2,[\Phi_1,\Phi_3]] – Jacobi identity

Suppose now  X is a vector field. It can also be considered as a tangent-valued 0-form:  X=1\otimes X. We then find that

 [X,\beta\otimes Y]=\beta\otimes [X,Y]+\mathfrak{L}_X\beta\otimes Y=\mathfrak{L}_X(\beta\otimes Y).

Therefore for vector fields the FN bracket reduces to an ordinary Lie derivative.

Another insight into the nature of the FN bracket comes from considering tangent-valued forms as operators on differential forms. First of all, for any  \Phi \in \Omega^{r+1}(E,TE) we can define a graded derivation  i_\Phi:\,\Omega^{s}E\rightarrow \Omega^{r+s} E by the formula:

     \begin{align*} (i_\Phi\, \omega)&(X_1,\ldots,X_{r+s})=\\ &\frac{1}{(r+1)!(s-1)!}\sum_{\sigma\in S_{r+s}}(-1)^\sigma\,\omega(\Phi(X_{\sigma 1},\ldots ,X_{\sigma (r+1)}),X_{\sigma (r+2)},\ldots) \end{align*}

Now I have a problem. In references [1] and [2] we have the following local formulas:

Screenshot from: Michor, Topics in Differential Geometry

Screenshot from: Kolar, Michor, Slovak,  Natural Operations in Differential Geometry

In our notation this would be:

 i_\Phi\omega=\sum\,\Phi^A_{B_1\ldots B_{r+1}}\,\omega_{AB_{r+2}\ldots B_{r+s}}\,d^B

But I am not able to reproduce this result. It seems to me that the factor s is missing in front of the expression on the RHS. I will return to this enigma at the end of this note.

Remark. Notice that in these formulas we assume  \Phi \in \Omega^{r+1}(E,TE), and not  \Phi\in\Omega^{r}(E,TE).

Then we define the Lie derivative  \mathfrak{L}_\Phi of a differential form \omega\in\Omega^s E along a tangent-valued form  \Phi\in \Omega^r(E,TE) by the formula

 \mathfrak{L}_\Phi\, \omega =i_\Phi\,d\omega-di_\Phi\,\omega


    \begin{align*} \mathfrak{L}_\Phi\,\omega=&\sum\,\left(\Phi^A_{B_1\ldots B_r}\partial_A\,\omega_{B_{r+1}\ldots B_{r+s}}\right.\\ &\left.+(-1)^r(\partial_{B_1}\Phi^A_{B_2\ldots B_{r+1}})\,\omega_{AB_{r+2}\ldots B_{r+s}}\right) d^B \end{align*}


    \begin{align*} ( &\mathfrak{L}_{\Phi}\,\omega)(X_1,\ldots ,X_{r+s})=\\ &=\frac{1}{r!s!}\,\sum_\sigma\,(-1)^\sigma \mathfrak{L}_{\Phi (X_{\sigma 1},\ldots ,X_{\sigma r})}(\omega(X_{\sigma (r+1)},\ldots,X_{\sigma (r+s)}))\\ &+(-1)^r\left(\frac{1}{r!(s-1)!}\sum_\sigma (-1)^\sigma\,\omega([X_{\sigma 1},\Phi(X_{\sigma 2},\ldots,X_{\sigma (r+1)})],X_{\sigma (r+2)},\ldots) \right.\\ &\left.-\frac{1}{(r-1)!(s-1)!2!}\sum_\sigma\,(-1)^\sigma\,\omega(\Phi([X_{\sigma 1},X_{\sigma 2}],X_{\sigma 3},\ldots),X_{\sigma (r+2)},\ldots)\right) \end{align*}

Again it is instructive to calculate the action of  i_{\Phi} and of this extended Lie derivative for \Phi=id_E. Using the local formula from [1] or [2]

 i_\Phi\omega=\sum\,\Phi^A_{B_1\ldots B_r}\,\omega_{AB_{r+1}\ldots B_{r+s}}\,d^B

we obtain that for  \Phi=id_E, that is for \Phi^A_B=\delta^A_B, we have

 i_{id_E}\,\omega=\omega.
But the identity map is not a graded derivation (of degree 0 in this case). On the other hand, from the formula involving vector fields I am getting

i_{id_E}\,\omega = s\,\omega

This looks good, because then

i_{id_E}(\omega_1\wedge\omega_2)=(s_1\omega_1)\wedge\omega_2+\omega_1\wedge (s_2\omega_2)=(s_1+s_2)\,\omega_1\wedge\omega_2

as it should be.
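The result i_{id_E}\omega = s\omega can indeed be checked mechanically from the alternating-sum formula for i_\Phi given above, taken with r = 0. A small plain-Python sketch, for a sample 2-form on R^3 (so s = 2):

```python
import itertools, math

def perm_sign(p):
    # sign of a permutation given as a tuple of indices
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def two_form(u, v):
    # a sample 2-form on R^3: omega = dx^dy + 2 dy^dz
    return (u[0]*v[1] - u[1]*v[0]) + 2*(u[1]*v[2] - u[2]*v[1])

def insertion_id(omega, s, vectors):
    # (i_{id} omega)(X_1,...,X_s) via the alternating-sum formula with r = 0:
    # 1/(1!(s-1)!) * sum_sigma sign(sigma) omega(id(X_{s1}), X_{s2}, ...)
    total = 0.0
    for p in itertools.permutations(range(s)):
        total += perm_sign(p) * omega(*(vectors[i] for i in p))
    return total / math.factorial(s - 1)

X1, X2 = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0)
lhs = insertion_id(two_form, 2, [X1, X2])
rhs = 2 * two_form(X1, X2)        # s * omega, with s = 2
assert abs(lhs - rhs) < 1e-12
```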

At this point I have to pause, until this puzzle is clarified. Perhaps I am missing something evident? I don’t know.

And here are the conventions concerning differential forms used in [1] and [2]:

Differential Forms - Conventions in "Topics in Differential Geometry"


[1] Peter W. Michor, Topics in Differential Geometry (see hxxp:// )
[2] Ivan Kolar, Peter W. Michor, Jan Slovak, Natural Operations in Differential Geometry (see hxxp:// )
[3] Andreas Kriegl, Peter W. Michor, The Convenient Setting for Global Analysis, Ch. 33.18