Real magic – space-time in Lie algebra

We start with a partial recall of events as they transpired so far. A month ago we became hyperbolic. The post Getting hyperbolic started with this sentence:

Without knowing it, during the last three posts (Our first field expedition, Our second field expedition, The Third Expedition) we became hyperbolic. Hyperbolic and conformal. Conformal and relativistic, relativistic and non-Euclidean.

Then we introduced the group SU(1,1) and its action on the disk. Next, in From SU(1,1) to the Lorentz group, we realized that the disk can be considered as a projection of a hyperboloid in 2+1 dimensional space-time, where the hyperboloid lies inside the future light cone. We were contemplating the image below

Space-time hyperboloid and the Poincaré disk models

and we have shown that there is a group homomorphism from SU(1,1) to the Lorentz group SO(2,1) of 2+1 dimensional special relativity. We have calculated the formula explicitly in Eq. (7) there. If A=\left[\begin{smallmatrix}\lambda&\mu\\ \bar{\mu}&\bar{\lambda}\end{smallmatrix}\right] is a complex matrix from SU(1,1) with \lambda=\lambda_r+i\lambda_i,\,\mu=\mu_r+i\mu_i split into real and imaginary parts, then the real 3\times 3 matrix L(A) from SO(2,1) is given by the formula:

(1)   \begin{equation*}L(A)=\begin{bmatrix}  -\lambda_i^2+\lambda_r^2-\mu_i^2+\mu_r^2 & 2 \lambda_i \lambda_r-2 \mu_i \mu_r & 2 \lambda_r \mu_r-2 \lambda_i \mu_i \\  -2 \lambda_i \lambda_r-2 \mu_i \mu_r & -\lambda_i^2+\lambda_r^2+\mu_i^2-\mu_r^2 & -2 \lambda_r \mu_i-2 \lambda_i \mu_r \\  2 \lambda_i \mu_i+2 \lambda_r \mu_r & 2 \lambda_i \mu_r-2 \lambda_r \mu_i & \lambda_i^2+\lambda_r^2+\mu_i^2+\mu_r^2 \end{bmatrix}.\end{equation*}

The map A\mapsto L(A) is a group homomorphism, that is, L(I)=I and L(A_1A_2)=L(A_1)L(A_2). (Of course in L(I)=I we denote by the same symbol "I" identity matrices of different sizes.)
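
If you would like to verify these two properties yourself, here is a quick numerical sketch in Python with NumPy (my own check, not the code behind the post): it builds L(A) from Eq. (1), tests that it preserves the metric diag(1,1,-1) of 2+1 dimensional space-time, and tests the homomorphism property on random SU(1,1) elements.

    import numpy as np

    def L_of_A(lam, mu):
        # Eq. (1): the 3x3 matrix built from lambda and mu of an SU(1,1) matrix
        lr, li, mr, mi = lam.real, lam.imag, mu.real, mu.imag
        return np.array([
            [-li**2 + lr**2 - mi**2 + mr**2,   2*li*lr - 2*mi*mr,              2*lr*mr - 2*li*mi],
            [-2*li*lr - 2*mi*mr,              -li**2 + lr**2 + mi**2 - mr**2, -2*lr*mi - 2*li*mr],
            [ 2*li*mi + 2*lr*mr,               2*li*mr - 2*lr*mi,              li**2 + lr**2 + mi**2 + mr**2]])

    def random_su11(rng):
        # a random SU(1,1) element: |lambda|^2 - |mu|^2 = 1
        mu = rng.normal() + 1j*rng.normal()
        lam = np.exp(1j*rng.uniform(0, 2*np.pi))*np.sqrt(1 + abs(mu)**2)
        return lam, mu

    rng = np.random.default_rng(0)
    eta = np.diag([1.0, 1.0, -1.0])        # Minkowski metric; the third axis is time
    (l1, m1), (l2, m2) = random_su11(rng), random_su11(rng)
    L1, L2 = L_of_A(l1, m1), L_of_A(l2, m2)

    assert np.allclose(L1.T @ eta @ L1, eta)      # L(A) is a Lorentz matrix

    A1 = np.array([[l1, m1], [np.conj(m1), np.conj(l1)]])
    A2 = np.array([[l2, m2], [np.conj(m2), np.conj(l2)]])
    A12 = A1 @ A2                                 # the product is again of SU(1,1) form
    assert np.allclose(L_of_A(A12[0, 0], A12[0, 1]), L1 @ L2)   # L(A1 A2) = L(A1) L(A2)
    print("Eq. (1) passes both checks")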

After that, in Getting real, we used the Cayley transform, which defines a group isomorphism between the complex matrix group SU(1,1) and the real matrix group SL(2,R). With

(2)   \begin{equation*}\mathcal{C}=\frac{1}{1+i}\begin{bmatrix}i&1\\-i&1\end{bmatrix}.\end{equation*}

(3)   \begin{equation*}\mathcal{C}^{-1}=\frac{1}{1-i}\begin{bmatrix}-i&i\\1&1\end{bmatrix}.\end{equation*}

we have that if A'=\left[\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right] is a real matrix from SL(2,R) (i.e. \det A'=\alpha\delta-\beta\gamma=1), then \mathcal{C}A'\mathcal{C}^{-1} is in SU(1,1). If we calculate explicitly \lambda and \mu in terms of \alpha,\beta,\gamma,\delta, the result is:

(4)   \begin{eqnarray*} \lambda_r&=&\frac{\alpha+\delta}{2},\\ \lambda_i&=&\frac{\beta-\gamma}{2},\\ \mu_r&=&\frac{\delta-\alpha}{2},\\ \mu_i&=&\frac{\beta+\gamma}{2}. \end{eqnarray*}
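
These relations are easy to verify with a computer algebra system. Here is one way to do it in Python with SymPy (an independent check, not the original derivation): conjugate a generic real matrix by the Cayley transform and compare the result with Eq. (4).

    from sympy import I, Matrix, re, im, simplify, symbols

    alpha, beta, gamma, delta = symbols('alpha beta gamma delta', real=True)

    C    = Matrix([[I, 1], [-I, 1]]) / (1 + I)          # Eq. (2)
    Cinv = Matrix([[-I, I], [1, 1]]) / (1 - I)          # Eq. (3)
    Aprime = Matrix([[alpha, beta], [gamma, delta]])    # real 2x2; det = 1 makes C A' C^{-1} land in SU(1,1)

    A = (C * Aprime * Cinv).applyfunc(simplify)
    lam, mu = A[0, 0], A[0, 1]

    # the result has the SU(1,1) form: the second row consists of the conjugates of (mu, lambda)
    print(simplify(A[1, 0] - mu.conjugate()), simplify(A[1, 1] - lam.conjugate()))   # 0 0

    # Eq. (4)
    print(simplify(re(lam) - (alpha + delta)/2))   # 0
    print(simplify(im(lam) - (beta - gamma)/2))    # 0
    print(simplify(re(mu) - (delta - alpha)/2))    # 0
    print(simplify(im(mu) - (beta + gamma)/2))     # 0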

Combining it with L(A) we obtain a group homomorphism from SL(2,R) to SO(2,1). Explicitly:

(5)   \begin{equation*}L(A')=\begin{bmatrix}  \frac{\alpha ^2-\beta ^2-\gamma ^2+\delta ^2}{2} & \alpha  \beta -\gamma  \delta  & \frac{-\alpha ^2-\beta ^2+\gamma ^2+\delta ^2}{2} \\  \alpha  \gamma -\beta  \delta  & \beta  \gamma +\alpha  \delta  & -\alpha  \gamma -\beta  \delta  \\  \frac{-\alpha ^2+\beta ^2-\gamma ^2+\delta ^2}{2} & -\alpha  \beta -\gamma  \delta  & \frac{\alpha ^2+\beta ^2+\gamma ^2+\delta ^2}{2} \end{bmatrix}.\end{equation*}
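
As a sanity check (again SymPy, not the original computation), one can verify symbolically that the matrix of Eq. (5) preserves the metric diag(1,1,-1) and has determinant one whenever \det A'=1, so it is indeed in SO(2,1).

    from sympy import Matrix, Rational, diag, simplify, symbols

    alpha, beta, gamma = symbols('alpha beta gamma', real=True, nonzero=True)
    delta = (1 + beta*gamma)/alpha           # enforces det A' = alpha*delta - beta*gamma = 1

    half = Rational(1, 2)
    L = Matrix([                             # Eq. (5)
        [half*(alpha**2 - beta**2 - gamma**2 + delta**2),  alpha*beta - gamma*delta,   half*(-alpha**2 - beta**2 + gamma**2 + delta**2)],
        [alpha*gamma - beta*delta,                          beta*gamma + alpha*delta,  -alpha*gamma - beta*delta],
        [half*(-alpha**2 + beta**2 - gamma**2 + delta**2), -alpha*beta - gamma*delta,   half*(alpha**2 + beta**2 + gamma**2 + delta**2)]])

    eta = diag(1, 1, -1)
    print((L.T*eta*L - eta).applyfunc(simplify))   # the zero matrix: L preserves the metric
    print(simplify(L.det()))                       # 1: L is in SO(2,1)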

So far, so good, but now there comes real magic!

Real magic

We do not have to travel around the world, through SU(1,1) and hyperboloids. We will derive the last formula directly using a method that is similar to the one we used when deriving the map from quaternions to rotation matrices in Putting a spin on mistakes.

The Lie algebra sl(2,R) of the Lie group SL(2,R) consists of real 2\times 2 matrices of zero trace – we call them “generators”. We have already met three particular generators X_1,X_2,X_3, for instance in SL(2,R) generators and vector fields on the half-plane; this time I will skip the primes with which they were previously denoted:

(6)   \begin{eqnarray*} X_1&=&\begin{bmatrix}0&1\\1&0\end{bmatrix},\\ X_2&=&\begin{bmatrix}-1&0\\0&1\end{bmatrix},\\ X_3&=&\begin{bmatrix}0&-1\\1&0\end{bmatrix}. \end{eqnarray*}

Every element of sl(2,R) is a real linear combination of these three. So X_1,X_2,X_3 can be considered a basis of sl(2,R). For instance, in SL(2,R) generators and vector fields on the half-plane we constructed the particular generators Y_+ and Y_-, defined as

(7)   \begin{eqnarray*}Y_+&=&\frac{1}{2}(X_1+X_3)=\begin{bmatrix}0&0\\1&0\end{bmatrix},\\ Y_-&=&\frac{1}{2}(X_1-X_3)=\begin{bmatrix}0&1\\0&0\end{bmatrix}. \end{eqnarray*}
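
For readers who like to experiment, here is a small SymPy snippet (my own illustration) that encodes this basis, checks that all three generators are traceless, reproduces Eq. (7), and decomposes a generic traceless matrix over X_1,X_2,X_3.

    from sympy import Matrix, Rational, symbols

    X1 = Matrix([[0, 1], [1, 0]])
    X2 = Matrix([[-1, 0], [0, 1]])
    X3 = Matrix([[0, -1], [1, 0]])

    print([X.trace() for X in (X1, X2, X3)])     # [0, 0, 0] -- all three are in sl(2,R)

    # Eq. (7)
    print(Rational(1, 2)*(X1 + X3))              # Matrix([[0, 0], [1, 0]]) = Y_plus
    print(Rational(1, 2)*(X1 - X3))              # Matrix([[0, 1], [0, 0]]) = Y_minus

    # a generic traceless matrix decomposes over this basis:
    a, b, c = symbols('a b c', real=True)
    combo = Rational(1, 2)*(b + c)*X1 - a*X2 + Rational(1, 2)*(c - b)*X3
    print(combo - Matrix([[a, b], [c, -a]]))     # the zero matrix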

The Lie algebra sl(2,R) is a three-dimensional real vector space. But it is more than just a vector space. The group SL(2,R) acts on this space by what is called “the adjoint representation”. This is true for the Lie algebra of any Lie group; here we have a particular case. Namely, if X is in sl(2,R), that is if X has trace zero, and if A is in SL(2,R), that is the determinant of A is one, then AXA^{-1} is also of zero trace (we do not even need the determinant-one property for this). The map X\mapsto AXA^{-1} is a linear map. Thus we have an action, let us call it \mathcal{L}, of SL(2,R) on sl(2,R):

(8)   \begin{equation*} \mathcal{L}(A)X = AXA^{-1}.\end{equation*}

Remark: I will now be skipping primes that I was using to distinguish matrices from SL(2,R) from matrices from SU(1,1).

Evidently (from associativity of matrix multiplication) we have

    \[\mathcal{L}(A_1A_2)=\mathcal{L}(A_1)\mathcal{L}(A_2).\]

Usually instead of \mathcal{L}(A) one writes \mathrm{Ad}(A) and uses the term “adjoint representation”. In short: the group acts on its Lie algebra by similarity transformations. A similarity transformation of a generator is another generator. Even more, by expanding the exponential into a power series we can easily find that

(9)   \begin{equation*}e^{t AXA^{-1}}=Ae^{tX}A^{-1}.\end{equation*}
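
Here is a quick numerical illustration of Eq. (9) in Python with NumPy/SciPy (an independent check, not part of the original post): we exponentiate a random traceless matrix to get an element of SL(2,R), pick a random X in sl(2,R), and compare both sides.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)

    a0, b0, c0 = rng.normal(size=3)
    A = expm(np.array([[a0, b0], [c0, -a0]]))    # exp of a traceless matrix has det 1, so A is in SL(2,R)

    a, b, c = rng.normal(size=3)
    X = np.array([[a, b], [c, -a]])              # a random element of sl(2,R)

    t = 0.7
    lhs = expm(t*(A @ X @ np.linalg.inv(A)))
    rhs = A @ expm(t*X) @ np.linalg.inv(A)
    assert np.allclose(lhs, rhs)
    print("exp(t A X A^(-1)) = A exp(tX) A^(-1) holds numerically")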

So sl(2,R) is a three-dimensional real vector space and SL(2,R) acts on it by linear transformations.

But that is not all. In sl(2,R) we can define a very nice scalar product (X,Y) as follows

(10)   \begin{equation*}(X,Y)=\frac{1}{2}\mathrm{tr} (XY),\end{equation*}

where \mathrm{tr} (XY) is the trace of the product of matrices X and Y.

Why is this scalar product “nice”? It is nice because with this scalar product the transformations \mathcal{L}(A) are all isometries – they preserve the scalar product:

(11)   \begin{equation*}(\mathcal{L}(A)X,\mathcal{L}(A)Y)=(X,Y).\end{equation*}

This last property follows from the definitions and from the fact that similarity transformations do not change the trace: (\mathcal{L}(A)X,\mathcal{L}(A)Y)=\frac{1}{2}\mathrm{tr}(AXA^{-1}\,AYA^{-1})=\frac{1}{2}\mathrm{tr}(AXYA^{-1})=\frac{1}{2}\mathrm{tr}(XY)=(X,Y).
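
If you prefer to let the computer do the algebra, the same invariance can be checked symbolically, for instance with SymPy (my own sketch, not part of the original post):

    from sympy import Matrix, Rational, simplify, symbols

    a, b, c, p, q, r = symbols('a b c p q r', real=True)
    alpha, beta, gamma, delta = symbols('alpha beta gamma delta', real=True)

    X = Matrix([[a, b], [c, -a]])                # traceless
    Y = Matrix([[p, q], [r, -p]])                # traceless
    A = Matrix([[alpha, beta], [gamma, delta]])  # any invertible matrix will do

    sp = lambda U, V: Rational(1, 2)*(U*V).trace()             # the scalar product of Eq. (10)
    print(simplify(sp(A*X*A.inv(), A*Y*A.inv()) - sp(X, Y)))   # 0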

So sl(2,R) is a three-dimensional real vector space with a scalar product. But in sl(2,R) we have our basis X_i,\,(i=1,2,3). It is easy to calculate the scalar products of the basis vectors. We get the following result for the matrix G with entries

(12)   \begin{equation*}G_{ij}=(X_i,X_j):\end{equation*}

(13)   \begin{equation*}G=\begin{bmatrix}1&0&0\\0&1&0\\0&0&-1\end{bmatrix}.\end{equation*}
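
The computation behind Eqs. (12)-(13) is essentially a one-liner, for instance in SymPy (just a check, not how it was originally done):

    from sympy import Matrix, Rational

    X = [Matrix([[0, 1], [1, 0]]),      # X_1
         Matrix([[-1, 0], [0, 1]]),     # X_2
         Matrix([[0, -1], [1, 0]])]     # X_3

    G = Matrix(3, 3, lambda i, j: Rational(1, 2)*(X[i]*X[j]).trace())
    print(G)    # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, -1]])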

Thus sl(2,R) has all the properties of the Minkowski space with two space dimensions and one time dimension. The generator X_3 has “time direction”, while X_1,X_2 are “space directions”.

Once we have a basis there, we can calculate the components of the transformations \mathcal{L}(A) in this basis:

(14)   \begin{equation*}\mathcal{L}(A)X_i=\sum_{j=1}^3 \mathcal{L}(A)_{ji}X_j,\,(i=1,2,3).\end{equation*}

I used Mathematica to calculate \mathcal{L}(A)_{ji} for A=\left[\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right]. Here is the result:

(15)   \begin{equation*} \mathcal{L}(A)=\begin{bmatrix}  \frac{\alpha ^2-\beta ^2-\gamma ^2+\delta ^2}{2} & \alpha  \beta -\gamma  \delta  & \frac{-\alpha ^2-\beta ^2+\gamma ^2+\delta ^2}{2} \\  \alpha  \gamma -\beta  \delta  & \beta  \gamma +\alpha  \delta  & -\alpha  \gamma -\beta  \delta  \\  \frac{-\alpha ^2+\beta ^2-\gamma ^2+\delta ^2}{2} & -\alpha  \beta -\gamma  \delta  & \frac{\alpha ^2+\beta ^2+\gamma ^2+\delta ^2}{2} \end{bmatrix}.\end{equation*}

Here I admit that for this last formula I simply copied and pasted Eq. (5) from above, because that is exactly what came out of the calculation – an identical result. And that is the real magic. We do not need an external space-time and hyperboloids. Everything is already contained in the group itself and in its Lie algebra!
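
For completeness, here is a SymPy version of that calculation (the post used Mathematica; this is my own transcription of the same idea): we compute the coordinates of A X_i A^{-1} in the basis X_1,X_2,X_3 and compare the resulting matrix with Eq. (5).

    from sympy import Matrix, Rational, simplify, symbols

    alpha, beta, gamma = symbols('alpha beta gamma', real=True, nonzero=True)
    delta = (1 + beta*gamma)/alpha                      # det A = 1 (alpha assumed nonzero)
    A    = Matrix([[alpha, beta], [gamma, delta]])
    Ainv = Matrix([[delta, -beta], [-gamma, alpha]])    # inverse of a determinant-one matrix

    X = [Matrix([[0, 1], [1, 0]]),
         Matrix([[-1, 0], [0, 1]]),
         Matrix([[0, -1], [1, 0]])]

    def coords(Y):
        # coefficients of a traceless Y in the basis X_1, X_2, X_3
        return [(Y[0, 1] + Y[1, 0])/2, -Y[0, 0], (Y[1, 0] - Y[0, 1])/2]

    # Eq. (14): the i-th column of the matrix of L(A) holds the coordinates of A X_i A^{-1}
    cols = [coords(A*Xi*Ainv) for Xi in X]
    L_adjoint = Matrix(3, 3, lambda i, j: cols[j][i])

    half = Rational(1, 2)
    L_eq5 = Matrix([
        [half*(alpha**2 - beta**2 - gamma**2 + delta**2),  alpha*beta - gamma*delta,   half*(-alpha**2 - beta**2 + gamma**2 + delta**2)],
        [alpha*gamma - beta*delta,                          beta*gamma + alpha*delta,  -alpha*gamma - beta*delta],
        [half*(-alpha**2 + beta**2 - gamma**2 + delta**2), -alpha*beta - gamma*delta,   half*(alpha**2 + beta**2 + gamma**2 + delta**2)]])

    print((L_adjoint - L_eq5).applyfunc(simplify))   # the zero matrix: the adjoint action reproduces Eq. (5)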

Attitude matrix and quaternion path for m>1

I noticed that somehow I did not finish with the case m>1. So today, without further ado, I am posting the algorithm.
We have the body with I_1<I_2<I_3, and we are considering the case with d>1/I_2, where d is, as always, the ratio of the doubled kinetic energy to angular momentum squared.
Then we define

(1)   \begin{eqnarray*} A_1&=&\sqrt{\frac{I_1(dI_3-1)}{I_3-I_1}},\\ A_2&=&\sqrt{\frac{I_2 (1-dI_1)}{I_2-I_1}},\\ A_3&=&\sqrt{\frac{I_3 (1-dI_1)}{I_3-I_1}},\\ B&=&\sqrt{\frac{(dI_3-1)(I_2-I_1)}{I_1I_2I_3}},\\ \mu&=&\frac{1}{m}=\frac{(1-dI_1)(I_3-I_2)}{(dI_3-1)(I_2-I_1)}. \end{eqnarray*}

(2)   \begin{eqnarray*} L_1(t)&=&A_1\,\dn(Bt,\mu),\\ L_2(t)&=&A_2\,\sn(Bt,\mu),\\ L_3(t)&=&A_3\,\cn(Bt,\mu). \end{eqnarray*}

(3)   \begin{equation*} \alpha=\frac{I_3-I_1}{\sqrt{\frac{I_1(dI_3-1)(I_2-I_1)I_3}{I_2}}}, \end{equation*}

(4)   \begin{equation*} \nu=\frac{I_3-dI_1I_3}{I_1-dI_1I_3}. \end{equation*}

(5)   \begin{equation*} \psi(t)=\frac{t}{I_3}-\arctan\left((A_2/A_1)\,\mathrm{sd}(Bt,\mu)\right)+\alpha\, \Pi(\nu,\am(Bt,\mu),\mu). \end{equation*}

(6)   \begin{equation*} Q_1(t)=\begin{bmatrix}1-\frac{L_1(t)^2}{1+L_3(t)}&-\frac{L_1(t)L_2(t)}{1+L_3(t)}&-L_1(t)\\ -\frac{L_1(t)L_2(t)}{1+L_3(t)}&1-\frac{L_2(t)^2}{1+L_3(t)}&-L_2(t)\\L_1(t)&L_2(t)&L_3(t)\end{bmatrix}. \end{equation*}

(7)   \begin{equation*} Q_2(t)=\begin{bmatrix}\cos\psi(t)&-\sin\psi(t)&0\\\sin\psi(t)&\cos\psi(t)&0\\0&0&1 \end{bmatrix}. \end{equation*}

(8)   \begin{equation*} Q(t)=Q_2(t)Q_1(t). \end{equation*}

(9)   \begin{eqnarray*} q_0(t)&=&\sqrt{\frac{1+L_3(t)}{2}}\cos\frac{\psi(t)}{2},\\ q_1(t)&=&\frac{1}{\sqrt{2(1+L_3(t))}}\left(L_2(t)\cos\frac{\psi(t)}{2}+L_1(t)\sin\frac{\psi(t)}{2}\right),\\ q_2(t)&=&\frac{1}{\sqrt{2(1+L_3(t))}}\left(L_2(t)\sin\frac{\psi(t)}{2}-L_1(t)\cos\frac{\psi(t)}{2}\right),\\ q_3(t)&=&\sqrt{\frac{1+L_3(t)}{2}}\,\sin\frac{\psi(t)}{2},\\ q(t)&=&(q_0(t),q_1(t),q_2(t),q_3(t))=q_0(t)+\mathbf{i}\,q_1(t)+\mathbf{j}\,q_2(t)+\mathbf{k}\,q_3(t). \end{eqnarray*}

I use the above formulas to draw a stereographic projection of one particular path. So, I take I_1=1,I_2=2,I_3=3,d=0.5000001, and do the parametric plot of the curve \mathbf{r}(t) in \mathbf{R}^3,
with

    \[\mathbf{r}(t)=\left(\frac{q_1(t)}{1-q_0(t)},\frac{q_2(t)}{1-q_0(t)},\frac{q_3(t)}{1-q_0(t)}\right).\]
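
For readers who want to reproduce the picture, here is a sketch of the whole algorithm in Python (my own transcription; the plots in the post were presumably made with Mathematica). It uses scipy.special.ellipj for the Jacobi functions and mpmath.ellippi for the incomplete elliptic integral of the third kind \Pi; the sampling of t is only an example.

    import numpy as np
    from scipy.special import ellipj
    from mpmath import ellippi          # incomplete elliptic integral of the third kind

    I1, I2, I3 = 1.0, 2.0, 3.0          # the moments of inertia used in the post
    d = 0.5000001

    # Eq. (1)
    A1 = np.sqrt(I1*(d*I3 - 1)/(I3 - I1))
    A2 = np.sqrt(I2*(1 - d*I1)/(I2 - I1))
    A3 = np.sqrt(I3*(1 - d*I1)/(I3 - I1))
    B  = np.sqrt((d*I3 - 1)*(I2 - I1)/(I1*I2*I3))
    mu = (1 - d*I1)*(I3 - I2)/((d*I3 - 1)*(I2 - I1))

    # Eqs. (3) and (4)
    alpha = (I3 - I1)/np.sqrt(I1*(d*I3 - 1)*(I2 - I1)*I3/I2)
    nu    = (I3 - d*I1*I3)/(I1 - d*I1*I3)

    def quaternion(t):
        # Eqs. (2), (5) and (9): angular momentum, the angle psi, and the attitude quaternion
        sn, cn, dn, am = ellipj(B*t, mu)
        L1, L2, L3 = A1*dn, A2*sn, A3*cn
        psi = t/I3 - np.arctan((A2/A1)*sn/dn) + alpha*float(ellippi(nu, am, mu))   # sd = sn/dn
        cp, sp = np.cos(psi/2), np.sin(psi/2)
        w = np.sqrt(2*(1 + L3))
        return np.array([np.sqrt((1 + L3)/2)*cp,
                         (L2*cp + L1*sp)/w,
                         (L2*sp - L1*cp)/w,
                         np.sqrt((1 + L3)/2)*sp])

    def stereographic(q):
        # projection of the unit quaternion q = (q0, q1, q2, q3) from the pole q0 = 1
        return q[1:]/(1 - q[0])

    ts = np.linspace(-1000, 1000, 5000)
    path = np.array([stereographic(quaternion(t)) for t in ts])
    print(path.shape)                        # (5000, 3): points of the geodesic, ready for a 3D parametric plot
    print(np.linalg.norm(quaternion(0.0)))   # 1.0: q(t) stays on the unit 3-sphere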

I show below two plots, one with t\in(-1000,1000), and one with t\in(-10000,10000). For this selected value of d, the time between consecutive flips is given by the formula

(10)   \begin{equation*}\tau =4\,\mathrm{EllipticK}(\mu)/B\approx 116.472.\end{equation*}

So, for t\in(-10000,10000) we have somewhat less than 200 flips, and the lines are getting rather densely packed in certain regions.
Notice that this is just one geodesic line – geometrically speaking, the straightest possible line in the geometry determined by the inertial properties of the body.
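
(For the record, the number above can be reproduced with a few lines of Python; scipy.special.ellipk uses the same parameter convention for \mu as EllipticK. This is just a check, not the original computation.)

    import numpy as np
    from scipy.special import ellipk       # complete elliptic integral of the first kind, K(m)

    I1, I2, I3, d = 1.0, 2.0, 3.0, 0.5000001
    B  = np.sqrt((d*I3 - 1)*(I2 - I1)/(I1*I2*I3))
    mu = (1 - d*I1)*(I3 - I2)/((d*I3 - 1)*(I2 - I1))

    tau = 4*ellipk(mu)/B
    print(tau)              # about 116.47
    print(20000/tau)        # about 172 flips for t in (-10000, 10000)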

It is this “geometry” that will become the main subject of the future notes.

Geodesic line for t between -1000 and 1000

Geodesic line for t between -10000 and 10000

How can such a line be “straight”? Well, it is….

Without goals everything is a propos of nothing

We are still traveling in the land of 4D butterflies made of remarkable circles, such as those introduced in the note Meeting with remarkable circles three days ago. We did not travel too far during these three days, but we managed to familiarize ourselves with one particular family of these peculiar creatures in the last post, Being a 4D butterfly – how does it feel? It was supposed to be an illustration of More than one path, where I quoted the wisdom of Miyamoto Musashi, expert Japanese swordsman and rōnin:

But, in fact, my many-geodesics 4D butterfly picture of yesterday did not fit the Buddhist philosophy of many paths to the top of the mountain:

4D butterfly

What we see are many paths, all of the same type, but each leading to a similar, yet different, mountain. It is, perhaps, a proper illustration of the saying that:
Happiness is a journey, not a destination.

It sounds amusing, but, as you can read in “Don’t think of a black cat” by Peter Wright,

Every scenario or paradigm and situation should have a goal or a collection of goals, outcomes that are worked towards. Without goals everything is a propos of nothing (as with Dirk and Deke … or maybe All I Wanna Do by Sheryl Crow).

All I Wanna Do – Sheryl Crow
This ain’t no country club
And it ain’t no disco
This is New York City
1, 2, 1, 2
“All I wanna do is have a little fun before I die, ”
Says the man next to me out of nowhere
It’s apropos of nothin’
He says, “His name is William”
But I’m sure he’s Bill or Billy or Mac or Buddy
And he’s plain ugly to me
And I wonder if he’s ever had a day of fun in his whole life

I promised to explain in detail how exactly the 4D butterfly is created. If only to have some fun. Yesterday I was not completely sure that my idea was good. Today I think it is OK, therefore worth sharing.

The key is the algorithm described in Meeting with remarkable circles. It produces a path q(t) in the 3D sphere in 4D space, the 3-sphere of quaternions of unit length. The function q(t) is a very particular solution of the evolution equation for a free rigid body rotating in zero-gravity space about its center of mass – as in the Dzhanibekov effect. It is a very special solution, with just one flip, and with the time origin t=0 arranged so that it is exactly in the middle of the flip, between one story of almost uniform rotation one way in the past, and another story of almost uniform rotation, the other way, in the future. For our particular path we have

(1)   \begin{equation*}q(0)=(\frac{\sqrt{3}}{2},0,\frac{1}{2},0),\quad\mbox{i.e. } q(0)=\frac{\sqrt{3}}{2}+\frac{1}{2}\,\mathbf{j}.\end{equation*}

The function q(t) is a solution of the quaternion evolution equation (cf. Quaternion evolution)

(2)   \begin{equation*}\dot{q}(t)=\frac{1}{2}q(t)\widehat{\boldsymbol{\Omega} (t)},\end{equation*}

where \Omega_i(t)=L_i(t)/I_i, and \mathbf{L} is a solution of Euler’s equations:

(3)   \begin{eqnarray*} \frac{dL_1}{dt}+(\frac{1}{I_2}-\frac{1}{I_3})L_2L_3&=&0,\\ \frac{dL_2}{dt}+(\frac{1}{I_3}-\frac{1}{I_1})L_3L_1&=&0,\\ \frac{dL_3}{dt}+(\frac{1}{I_1}-\frac{1}{I_2})L_1L_2&=&0. \end{eqnarray*}
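
For those who want to play with these equations numerically, here is a minimal integration sketch in Python with SciPy (my own example, not the code used for the post). The initial angular momentum below is an arbitrary choice near the unstable middle axis, so that a flip occurs; it is not the special one-flip solution of Meeting with remarkable circles.

    import numpy as np
    from scipy.integrate import solve_ivp

    I1, I2, I3 = 1.0, 2.0, 3.0
    Ivec = np.array([I1, I2, I3])

    def quat_mult(p, q):
        # Hamilton product of quaternions p = (p0, p1, p2, p3) and q = (q0, q1, q2, q3)
        p0, p1, p2, p3 = p
        q0, q1, q2, q3 = q
        return np.array([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                         p0*q1 + p1*q0 + p2*q3 - p3*q2,
                         p0*q2 - p1*q3 + p2*q0 + p3*q1,
                         p0*q3 + p1*q2 - p2*q1 + p3*q0])

    def rhs(t, y):
        # y = (L1, L2, L3, q0, q1, q2, q3): Euler's equations, Eq. (3), coupled to Eq. (2)
        L, q = y[:3], y[3:]
        dL = np.array([-(1/I2 - 1/I3)*L[1]*L[2],
                       -(1/I3 - 1/I1)*L[2]*L[0],
                       -(1/I1 - 1/I2)*L[0]*L[1]])
        Omega = L/Ivec                                     # Omega_i = L_i / I_i
        dq = 0.5*quat_mult(q, np.array([0.0, *Omega]))     # right multiplication by the pure quaternion Omega-hat
        return np.concatenate([dL, dq])

    # q(0) as in Eq. (1); L0 is only an illustrative initial condition (see the note above)
    q0 = np.array([np.sqrt(3)/2, 0.0, 0.5, 0.0])
    L0 = np.array([0.001, 1.0, 0.001])
    sol = solve_ivp(rhs, (0.0, 100.0), np.concatenate([L0, q0]), rtol=1e-10, atol=1e-12)

    print(np.linalg.norm(sol.y[3:, -1]))   # stays equal to 1 up to integration error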

There are two simple ways in which we get, from our particular solution q(t), other solutions. The first way is to multiply q(t) by a constant unit quaternion u. If q(t) is a solution, then uq(t) is another solution. This follows by multiplying both sides of Eq. (2) by u from the left, and by noticing that u, being constant, passes under the symbol of differentiation d/dt, indicated by the dot over q.

The second method is by shifting the origin of time. For any fixed s the function t\mapsto q(t+s) is also a solution of Eq. (2) if q(t) is. Formally it follows from the rule of differentiation of a composite function (the Chain rule) and from the fact that \frac{d}{dt}(t+s)=1.

For producing the family of solutions used in the 4D butterfly image we use both methods. Let q(t,s) be defined as follows:

(4)   \begin{equation*}q(t,s)=q(0)q(s)^*q(s+t).\end{equation*}

Then, for any fixed s, the function t\mapsto q(t,s) is a solution of Eq. (2) with the property that q(0,s)=q(0), that is, all trajectories have the same origin, namely q(0). In verifying this last property we use the fact that the q(s) are unit quaternions, therefore q(s)^*q(s)=1.
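
Here is a small numerical illustration of this construction (my own sketch). Since the one-flip solution needs the machinery of the earlier posts, I use a stand-in exact solution of Eq. (2) – steady rotation about the third axis – just to show that the construction of Eq. (4) indeed pins every member of the family to q(0):

    import numpy as np

    def quat_mult(p, q):
        # Hamilton product, same convention as in Eq. (2)
        p0, p1, p2, p3 = p
        q0, q1, q2, q3 = q
        return np.array([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                         p0*q1 + p1*q0 + p2*q3 - p3*q2,
                         p0*q2 - p1*q3 + p2*q0 + p3*q1,
                         p0*q3 + p1*q2 - p2*q1 + p3*q0])

    def conj(q):
        return np.array([q[0], -q[1], -q[2], -q[3]])

    # stand-in exact solution of Eq. (2): steady rotation about the third axis,
    # q(t) = q(0) (cos(omega t/2) + k sin(omega t/2)), with q(0) taken from Eq. (1)
    omega = 1.0/3.0                                        # Omega_3 = L_3/I_3 for L = (0, 0, 1), I_3 = 3
    q_init = np.array([np.sqrt(3)/2, 0.0, 0.5, 0.0])

    def q(t):
        return quat_mult(q_init, np.array([np.cos(omega*t/2), 0.0, 0.0, np.sin(omega*t/2)]))

    def q_shifted(t, s):
        # Eq. (4): q(t, s) = q(0) q(s)^* q(s + t)
        return quat_mult(quat_mult(q_init, conj(q(s))), q(s + t))

    for s in np.arange(-4.0, 4.1, 0.4):                    # the range of s used for the butterfly
        assert np.allclose(q_shifted(0.0, s), q_init)      # every member of the family starts at q(0)
    print("all shifted solutions pass through q(0)")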

To produce the butterfly on the picture above I used s from s=-4 to s=4, with step 0.4. Today I went further, with s between -10 and 10, step 0.5, to produce this image:

An interesting symmetry shows up when looking from far away, between circles that are “bridges” and circles that are “attractors”.

Some mystery is hidden there, waiting to be discovered in the future.

But, all of the above was in the spirit of “Happiness is a journey, not a destination” philosophy. What about many paths to the top of the mountain?

That would be keeping the attractor circles fixed (these are the two mountain tops, one in the past, one in the future), and changing only the bridges connecting these limit circles.

This is, in fact, much easier. We just shift the time origin and skip the multiplication from the left. Here are images produced this way.

s between -1 and 1

s between -4 and 4

s between -8 and 8

The two eternal return circles are the same, only the bridges connecting them are shifting.

This last image resembles images from more realistic many-flip Dzhanibekov histories. It is time now to study them… if only to have some fun.