Becoming anti de Sitter

In the last post we were discussing Killing vector fields of the group SL(2,R). We did it without giving any particular reason, except that the subject somehow came our way naturally. But now there is an opportunity to relate our theme to something fashionable in theoretical physics: the holographic principle and the AdS/CFT correspondence.
[latexpage]
We were playing with AdS without knowing it. Here AdS stands for “anti-de Sitter” space. Let us therefore look into the content of one pedagogical paper dealing with the subject: “Anti-de Sitter space, squashed and stretched” by Ingemar Bengtsson and Patrik Sandin. We will not be squashing and stretching – not yet. Our task is “to connect” to what other people are doing. Let us start reading Section 2 of the paper, “Geodetic congruence in anti-de Sitter space”. There we read:

For the 2+1 dimensional case the definition can be reformulated in an interesting way. Anti-de Sitter space can be regarded as the group manifold of $SL(2,{\bf R})$, that is as the set of matrices

$$A = \left[ \begin{array}{cc} V+X & Y+U \\ Y-U & V-X \end{array}
\right] \ , \hspace{10mm} \det A = U^2 + V^2 - X^2 - Y^2 = 1
\ . \label{eq:A}$$

It is clear that every SL(2,R) matrix $A=\left[\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right]$ can be uniquely written in the above form.
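Indeed, the change of variables is linear: $V=(\alpha+\delta)/2$, $X=(\alpha-\delta)/2$, $Y=(\beta+\gamma)/2$, $U=(\beta-\gamma)/2$, and then $\det A=\alpha\delta-\beta\gamma=U^2+V^2-X^2-Y^2$. A quick numerical sanity check in Python (the function name is mine):

```python
import numpy as np

# Solving A = [[V+X, Y+U], [Y-U, V-X]] for the four parameters:
# V = (a+d)/2, X = (a-d)/2, Y = (b+c)/2, U = (b-c)/2.
def vxyu(A):
    (a, b), (c, d) = A
    return (a + d) / 2, (a - d) / 2, (b + c) / 2, (b - c) / 2  # V, X, Y, U

A = np.array([[2.0, 3.0], [1.0, 2.0]])       # det A = 1, so A is in SL(2,R)
V, X, Y, U = vxyu(A)
print(U**2 + V**2 - X**2 - Y**2)             # 1.0
```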

But Section 2 starts with something else:

\noindent Anti-de Sitter space is defined as a quadric surface embedded in a flat space of signature $(+ \dots + - -)$. Thus 2+1 dimensional anti-de Sitter space is defined as the hypersurface

$$X^2 + Y^2 - U^2 - V^2 = - 1 \label{eq:22h}$$

\noindent embedded in a 4 dimensional flat space with the metric

$$ds^2 = dX^2 + dY^2 – dU^2 – dV^2 \ .$$

\noindent The Killing vectors are denoted $J_{XY} = X\partial_Y – Y\partial_X$,
$J_{XU} = X\partial_U + U\partial_X$, and so on. The topology is now
${\bf R}^2 \times {\bf S}^1$, and one may wish to go to the covering
space in order to remove the closed timelike curves. Our arguments
will mostly not depend on whether this final step is taken.

For the 2+1 dimensional case the definition can be reformulated in an interesting way. Anti-de Sitter space can be regarded as the group manifold of $SL(2,{\bf R})$, that is as the set of matrices

$$g = \left[ \begin{array}{cc} V+X & Y+U \\ Y-U & V-X \end{array}
\right] \ , \hspace{10mm} \det g = U^2 + V^2 - X^2 - Y^2 = 1
\ . \label{gg}$$

\noindent The group manifold is equipped with its natural metric, which is invariant under transformations $g \rightarrow g_1gg_2^{-1}$, $g_1, g_2 \in SL(2, {\bf R})$. The Killing vectors can now be organized into two orthonormal and mutually commuting sets,

\begin{eqnarray} & J_1 = - J_{XU} - J_{YV} \hspace{15mm}
& \tilde{J}_1 = - J_{XU} + J_{YV} \\
& J_2 = - J_{XV} + J_{YU} \hspace{15mm}
& \tilde{J}_2 = - J_{XV} - J_{YU} \\
& J_0 = - J_{XY} - J_{UV} \hspace{15mm}
& \tilde{J}_0 = J_{XY} - J_{UV} \ . \end{eqnarray}

\noindent They obey

$$||J_1||^2 = ||J_2||^2 = - ||J_0||^2 = 1 \ , \hspace{3mm}
||\tilde{J}_1||^2 = ||\tilde{J}_2||^2 = - ||\tilde{J}_0||^2 = 1 \ .$$
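These norms are easy to verify numerically. Below is a small Python sketch (coordinates ordered $(X,Y,U,V)$, my choice): the ambient Killing vectors are evaluated at a point of the quadric, and squared norms are taken with the flat metric $\mathrm{diag}(1,1,-1,-1)$.

```python
import numpy as np

G = np.diag([1.0, 1.0, -1.0, -1.0])          # flat metric in (X, Y, U, V) order

def norm2(v):
    return v @ G @ v

# A point on the quadric X^2 + Y^2 - U^2 - V^2 = -1
X, Y, U = 0.3, -0.5, 0.5
V = np.sqrt(1.0 + X**2 + Y**2 - U**2)

# Ambient Killing fields as vectors at the point (X, Y, U, V)
J_XY = np.array([-Y,  X, 0., 0.]); J_UV = np.array([0., 0., -V,  U])
J_XU = np.array([ U, 0.,  X, 0.]); J_YV = np.array([0.,  V, 0.,  Y])
J_XV = np.array([ V, 0., 0.,  X]); J_YU = np.array([0.,  U,  Y, 0.])

J1, J2, J0 = -J_XU - J_YV, -J_XV + J_YU, -J_XY - J_UV
print(norm2(J1), norm2(J2), norm2(J0))       # approximately 1.0, 1.0, -1.0
print(J1 @ G @ J2)                           # approximately 0: J1, J2 orthogonal
```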

The story here is this: $2\times 2$ real matrices form a four-dimensional real vector space. We can use $\alpha,\beta,\gamma,\delta$ or $V,U,X,Y$ as coordinates $y^1,y^2,y^3,y^4$ there. The condition of having determinant one defines a three-dimensional hypersurface in $\mathbf{R}^4.$ We can endow $\mathbf{R}^4$ with a scalar product determined by the matrix $G$ defined by:
$$G=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}.$$
The scalar product is then defined as
$$(y,y')=y^TGy'=G_{ij}y^{i}y'^{j}=y^1y'^1+y^2y'^2-y^3y'^3-y^4y'^4.$$
This scalar product is invariant with respect to the group O(2,2) of $4\times 4$ real matrices $A$ satisfying:
$$A^TGA=G.\label{eq:atga}$$
That is, if $A\in O(2,2)$ then $(Ay,Ay’)=(y,y’)$ for all $y,y’$ in $\mathbf{R}^4.$

What I will be writing now is “elementary” in the sense that “everybody in the business” knows it, and, if asked, will often not be able to tell where and when she/he learned it. But this is a blog, and the subject is so pretty that it would be a pity if people “not in the business” were to miss it.

Equation (\ref{eq:22h}) can then be written as $(y,y)=-1.$ It determines a “generalized hyperboloid” in $\mathbf{R}^4$ that is invariant with respect to the action of O(2,2). Thus the situation is analogous to the one we have seen in The disk and the hyperbolic model. There we had the Poincaré disk realized as a two-dimensional hyperboloid in a three-dimensional space with signature (2,1); here we have SL(2,R) realized as a generalized hyperboloid in a four-dimensional space with signature (2,2). Before it was the group O(2,1) that was acting on the hyperboloid, now it is the group O(2,2). Let us look at the vector fields of the generators of this group. By differentiating Eq. (\ref{eq:atga}) at the group identity we find that each generator $\Xi$ must satisfy the equation:
$$\Xi^TG+G\Xi=0.$$
This equation can be also written as
$$(G\Xi)^T+G\Xi=0.$$
Thus $G\Xi$ must be antisymmetric. In $n$ dimensions the space of antisymmetric matrices is $n(n-1)/2$-dimensional. For us $n=4,$ therefore the Lie algebra so(2,2) is 6-dimensional, like the Lie algebra so(4) – they are simply related by matrix multiplication $\Xi\mapsto G\Xi.$ We need a basis in so(2,2), so let us start with a basis in so(4). Let $M_{(\mu\nu)}$ denote the elementary antisymmetric matrix that has $1$ in row $\mu$, column $\nu$ and $-1$ in row $\nu$ column $\mu$ for $\mu\neq \nu$, and zeros everywhere else. In a formula $(M_{(\mu\nu)})_{\alpha\beta}=\delta_{\alpha\mu}\delta_{\beta\nu}-\delta_{\alpha\nu}\delta_{\beta\mu},$
where $\delta_{\mu\nu}$ is the Kronecker delta symbol: $\delta_{\mu\nu}=1$ for $\mu=\nu,$ and $=0$ for $\mu\neq\nu.$

As we have mentioned above, the matrices $\Xi_{(\mu\nu)}=G^{-1}M_{(\mu\nu)}$ then form a basis of the Lie algebra so(2,2). We can list them as follows:
$$\Xi_{(12)}=\left[\begin{smallmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&0\\0&0&0&0\end{smallmatrix}\right],\quad
\Xi_{(13)}=\left[\begin{smallmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{smallmatrix}\right],\quad
\Xi_{(14)}=\left[\begin{smallmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{smallmatrix}\right],$$
$$\Xi_{(23)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&0\end{smallmatrix}\right],\quad
\Xi_{(24)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&1&0&0\end{smallmatrix}\right],\quad
\Xi_{(34)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\end{smallmatrix}\right].$$
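If you prefer to check all of this by machine rather than by hand, here is a Python sketch (using NumPy and SciPy's `expm`; the names are mine). It builds the six matrices $\Xi_{(\mu\nu)}=G^{-1}M_{(\mu\nu)}$, verifies the generator condition $\Xi^TG+G\Xi=0$, and checks that the one-parameter groups $\exp(t\Xi)$ indeed satisfy $A^TGA=G$.

```python
import numpy as np
from scipy.linalg import expm

G = np.diag([1.0, 1.0, -1.0, -1.0])

def M(mu, nu, n=4):
    """Elementary antisymmetric matrix M_(mu nu), 0-based indices."""
    m = np.zeros((n, n))
    m[mu, nu], m[nu, mu] = 1.0, -1.0
    return m

Ginv = np.linalg.inv(G)                      # here G is its own inverse
basis = [Ginv @ M(mu, nu) for mu in range(4) for nu in range(mu + 1, 4)]

for Xi in basis:
    # generator condition Xi^T G + G Xi = 0
    assert np.allclose(Xi.T @ G + G @ Xi, 0)
    # the one-parameter group it generates preserves the scalar product
    A = expm(0.7 * Xi)
    assert np.allclose(A.T @ G @ A, G)
print("all six generators lie in so(2,2)")
```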

In the next post we will relate these generators to $J_i,\tilde{J}_i$ from the anti-de Sitter paper by Bengtsson and Sandin, and to our Killing vector fields $\xi_{iL},\xi_{iR}$ from the last note.

SL(2,R) Killing vector fields in coordinates

[latexpage]
In Parametrization of SL(2,R) we introduced global coordinates $x^1=\theta,x^2=r,x^3=u$ on the group SL(2,R). Any matrix $A$ in SL(2,R) can be uniquely written as

$$A(\theta,r,u)=\begin{bmatrix}
r \cos (\theta )+\frac{u \sin (\theta )}{r} & \frac{u\cos (\theta )}{r}-r \sin (\theta ) \\
\frac{\sin (\theta )}{r} & \frac{\cos (\theta )}{r}\end{bmatrix}.$$

If $A$ is the matrix with components $A_{ij},$ then its coordinates can be expressed as functions of the matrix components as follows

\begin{eqnarray}
x^1(A)&=&\mathrm{atan2}(A_{21},A_{22}),\\
x^2(A)&=&\frac{1}{\sqrt{A_{21}^2+A_{22}^2}},\\
x^3(A)&=&\frac{A_{11}A_{21}+A_{12}A_{22}}{A_{21}^2+A_{22}^2}.
\end{eqnarray}
The function $\mathrm{atan2}(y,x)$ returns the angle $\theta$, $0\leq\theta<2\pi$, of the complex number $x+iy=\sqrt{x^2+y^2}\,e^{i\theta}.$ Once we have coordinates, it is easy to calculate the components of tangent vectors to any given path $A(t)$: they are given by the derivatives of the coordinates $x^{i}(A(t)).$ We will now calculate the vector fields resulting from left and right actions of one-parameter subgroups of SL(2,R) that we have already met in SL(2,R) generators and vector fields on the half-plane.
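The parametrization and the coordinate formulas can be checked against each other numerically. A Python sketch (function names are mine); note the reduction of $\mathrm{atan2}$ to the range $[0,2\pi)$ used above:

```python
import numpy as np

def A_of(theta, r, u):
    """The matrix A(theta, r, u) of the parametrization."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[r*c + u*s/r, u*c/r - r*s],
                     [s/r,         c/r       ]])

def coords(A):
    """Recover (theta, r, u) from the matrix entries."""
    theta = np.arctan2(A[1, 0], A[1, 1]) % (2*np.pi)   # range [0, 2*pi)
    r = 1.0 / np.hypot(A[1, 0], A[1, 1])
    u = (A[0, 0]*A[1, 0] + A[0, 1]*A[1, 1]) * r**2
    return theta, r, u

theta, r, u = 1.2, 0.8, -0.4
A = A_of(theta, r, u)
print(np.isclose(np.linalg.det(A), 1.0))      # True: A is in SL(2,R)
print(np.allclose(coords(A), (theta, r, u)))  # True: round trip works
```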

We have introduced there the one-parameter groups, that we will denote now as $U_1,U_2,U_3$, generated by $X_1,X_2,X_3$:

\begin{eqnarray}
X_1&=&\begin{bmatrix}0&1\\1&0\end{bmatrix},\\
X_2&=&\begin{bmatrix}-1&0\\0&1\end{bmatrix},\\
X_3&=&\begin{bmatrix}0&-1\\1&0\end{bmatrix}.
\label{eq:x123}\end{eqnarray}

We can take exponentials of the generators and construct one-parameter subgroups

\begin{eqnarray}
U_1(t)&=&\exp tX_1=\begin{bmatrix}\cosh t&\sinh t\\ \sinh t&\cosh t\end{bmatrix},\\
U_2(t)&=&\exp tX_2=\begin{bmatrix}e^{-t}&0\\ 0&e^{t}\end{bmatrix},\\
U_3(t)&=&\exp tX_3=\begin{bmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{bmatrix}.
\end{eqnarray}
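These closed forms of the exponentials are easy to confirm with computer algebra. A SymPy sketch, assuming the generators listed above:

```python
import sympy as sp

t = sp.symbols('t', real=True)
X1 = sp.Matrix([[0, 1], [1, 0]])
X2 = sp.Matrix([[-1, 0], [0, 1]])
X3 = sp.Matrix([[0, -1], [1, 0]])

# the closed forms claimed for exp(t X_i)
U1 = sp.Matrix([[sp.cosh(t), sp.sinh(t)], [sp.sinh(t), sp.cosh(t)]])
U2 = sp.Matrix([[sp.exp(-t), 0], [0, sp.exp(t)]])
U3 = sp.Matrix([[sp.cos(t), -sp.sin(t)], [sp.sin(t), sp.cos(t)]])

for X, U in [(X1, U1), (X2, U2), (X3, U3)]:
    # matrix exponential computed by SymPy agrees with the closed form
    assert sp.simplify((t*X).exp() - U) == sp.zeros(2, 2)
print("all three exponentials confirmed")
```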

In SL(2,R) generators and vector fields on the half-plane we were acting with these transformations on the upper half-plane using fractional linear representation. Now we will be acting on SL(2,R) itself via group multiplication, either left or right.

Let us start with $U_1$ acting from the left. At $A$ we have the trajectory $U_1(t)A$. Its coordinates are $x^{i}(t)=x^{i}(U_1(t)A).$ Let us denote by $\xi_{1,L}$ the tangent vector field, with components $\xi_{1,L}^{i}$, $i=1,2,3$. Then
$$\xi_{1,L}^{i}=\frac{dx^{i}(U_1(t)A)}{dt}|_{t=0}.$$

We can calculate it easily using computer algebra software. The result is:

$$\xi_{1,L}=(r^2,-ur,1+r^4-u^2).$$

In the same way we get

$$\xi_{2,L}=(0,-r,-2u),$$
$$\xi_{3,L}=(0,-ur,-1+r^4-u^2).$$
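Readers without Mathematica can reproduce these fields with SymPy. The sketch below (function names are mine) implements the definition literally: express the coordinates $x^{i}$ as functions of the matrix entries, move the matrix along $\exp(tX)A$, and differentiate at $t=0$.

```python
import sympy as sp

t = sp.symbols('t', real=True)
theta, u = sp.symbols('theta u', real=True)
r = sp.symbols('r', positive=True)

A = sp.Matrix([[r*sp.cos(theta) + u*sp.sin(theta)/r, u*sp.cos(theta)/r - r*sp.sin(theta)],
               [sp.sin(theta)/r,                     sp.cos(theta)/r]])

def coords(B):
    """(theta, r, u) as functions of the matrix entries of B."""
    return sp.Matrix([sp.atan2(B[1, 0], B[1, 1]),
                      1/sp.sqrt(B[1, 0]**2 + B[1, 1]**2),
                      (B[0, 0]*B[1, 0] + B[0, 1]*B[1, 1])/(B[1, 0]**2 + B[1, 1]**2)])

def left_field(X):
    """Components xi_L^i = d/dt x^i(exp(tX) A) at t = 0."""
    B = (t*X).exp() * A
    return sp.simplify(coords(B).diff(t).subs(t, 0))

X1 = sp.Matrix([[0, 1], [1, 0]])
print(left_field(X1).T)   # should match xi_{1,L} = (r^2, -ur, 1 + r^4 - u^2)
```

Replacing $X_1$ by $X_2$ or $X_3$, and $\exp(tX)A$ by $A\exp(tX)$, gives the remaining fields in the same way.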

Then we can calculate vector fields of right shifts

$$\xi_{j,R}^{i}=\frac{dx^{i}(AU_j(t))}{dt}|_{t=0},$$

to obtain:
$$\xi_{1,R}=(\cos (2 \theta ),-r \sin (2 \theta ),2 r^2 \cos (2 \theta )),$$
$$\xi_{2,R}=(-2 \sin (\theta ) \cos (\theta ),-r \cos (2 \theta ),-4 r^2 \sin (\theta ) \cos (\theta )),$$
$$\xi_{3,R}=(1,0,0).$$

We know that our metric is bi-invariant. That means that the vector fields of the left and right shifts generate one-parameter groups of isometries. Such fields are called Killing vector fields of the metric. In differential geometry one shows that a vector field $\xi$ is a Killing vector field for a metric $g$ if and only if the Lie derivative $L_\xi$ of the metric vanishes. The Lie derivative of a symmetric tensor $T_{ab}$ is defined as

$$(L_\xi T)_{ab}=\xi^c\frac{\partial T_{ab}}{\partial x^c}+\frac{\partial \xi^c}{\partial x^{a}}T_{cb}+\frac{\partial \xi^c}{\partial x^{b}}T_{ac}.$$

Using any computer algebra software it is easy to verify that the Lie derivatives of our metric with respect to all six vector fields $\xi_{j,L},\xi_{j,R}$ indeed vanish. A Mathematica notebook verifying this property can be downloaded here.
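For those without Mathematica, the same verification can be sketched in SymPy. I assume here that the metric is the bi-invariant one built from the trace form, $g_{ij}=\frac{1}{2}\mathrm{Tr}(A^{-1}\partial_iA\,A^{-1}\partial_jA)$; the check below computes the metric in the coordinates $(\theta,r,u)$ and the Lie derivative along $\xi_{2,L}$.

```python
import sympy as sp

theta, u = sp.symbols('theta u', real=True)
r = sp.symbols('r', positive=True)
x = [theta, r, u]

A = sp.Matrix([[r*sp.cos(theta) + u*sp.sin(theta)/r, u*sp.cos(theta)/r - r*sp.sin(theta)],
               [sp.sin(theta)/r,                     sp.cos(theta)/r]])
Ainv = A.inv()

# metric components g_ij = (1/2) Tr(A^{-1} d_i A  A^{-1} d_j A)
g = sp.Matrix(3, 3, lambda i, j: sp.simplify(
        (Ainv*A.diff(x[i])*Ainv*A.diff(x[j])).trace() / 2))

def lie_derivative(xi):
    """(L_xi g)_ab = xi^c d_c g_ab + (d_a xi^c) g_cb + (d_b xi^c) g_ac"""
    L = sp.zeros(3, 3)
    for a in range(3):
        for b in range(3):
            L[a, b] = sum(xi[c]*g[a, b].diff(x[c])
                          + xi[c].diff(x[a])*g[c, b]
                          + xi[c].diff(x[b])*g[a, c] for c in range(3))
    return sp.simplify(L)

xi2L = sp.Matrix([0, -r, -2*u])               # the field xi_{2,L} found above
L2 = lie_derivative(xi2L)
print(L2)                                     # all nine components reduce to 0
```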

Riemannian metrics – left, right and bi-invariant

[latexpage]
The discussion in this post applies to Riemannian metrics on Lie groups in general, but we will concentrate on just the one case at hand: SL(2,R). Let $G$ be a Lie group. Vectors tangent at the identity $e\in G$ to paths in $G$ form the Lie algebra of the group. Usually it is denoted by $\mathrm{Lie}(G).$ It is a real linear space, endowed with the commutator. For matrix groups the Lie algebra is a space of matrices (for the group SL(2,R) the Lie algebra sl(2,R) consists of matrices of zero trace), and the commutator is realized as the commutator of matrices: $[A,B]=AB-BA.$

Suppose we have a scalar product $(\xi,\eta)_e$ defined on the Lie algebra. For instance, if $\xi,\eta$ are matrices, we may try to define $(\xi,\eta)_e=\mathrm{Tr}(\xi\eta)$. That is a possible natural definition, and the scalar product so defined is automatically symmetric. But it is not always nondegenerate, so one needs to be careful. Once we have a scalar product at the identity, we can use group multiplication to propagate it to the whole group space by left translations:
$$(\xi,\eta)_g=(g^{-1}\xi,g^{-1}\eta)_e,\label{eq:sc11}$$
if $\xi,\eta$ are tangents to paths at $g$.

Notice that we are multiplying tangent vectors by group elements. For matrix groups that is easy: we simply multiply matrices. For more general groups it is understood that if $\xi$ is tangent to the path $\gamma(t)$, then $g\xi$ is tangent to $g\gamma(t).$

The scalar product defined everywhere by Eq. (\ref{eq:sc11}) automatically has the property of being left-invariant:

$$(g\xi,g\eta)=(\xi,\eta).$$

The above equation needs explanation; its meaning follows from the context. On the left-hand side we have a scalar product, so it must be calculated at a certain point. We need to give this point some name. The symbol $g$ is already used in the formula for the shift, so let us choose $h$. We thus specify the left-hand side to mean
$(g\xi,g\eta)_h.$

Now, if $g\xi$ is tangent at $h$, then $\xi$ itself must be tangent at $g^{-1}h.$ So, the complete formula should be:

$$(g\xi,g\eta)_h=(\xi,\eta)_{g^{-1}h}.$$

That is supposed to hold for any $g,h$ in the group. How do we prove it? We use the definition (\ref{eq:sc11}). The left hand side becomes
$$(g\xi,g\eta)_h=(h^{-1}(g\xi),h^{-1}(g\eta))_e.$$
The right hand side becomes

$$(\xi,\eta)_{g^{-1}h}=((g^{-1}h)^{-1}\xi,(g^{-1}h)^{-1}\eta)_e.$$

The left and right hand sides equal because of the associativity of multiplication: $(g^{-1}h)^{-1}\xi=h^{-1}(g\xi).$
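The whole argument is pure associativity, so a numerical spot check is easy. A Python sketch with the trace scalar product (the particular matrices are arbitrary choices of mine):

```python
import numpy as np

def sc(xi, eta, g):
    """(xi,eta)_g = (1/2) Tr((g^{-1} xi)(g^{-1} eta)): Eq. (sc11) with the trace form."""
    gi = np.linalg.inv(g)
    return 0.5 * np.trace(gi @ xi @ gi @ eta)

g = np.array([[2.0, 3.0], [1.0, 2.0]])       # det 1
h = np.array([[1.0, 0.5], [1.0, 1.5]])       # det 1
xi  = np.array([[0.3, -1.2], [0.7,  0.4]])   # arbitrary tangent vectors
eta = np.array([[-0.5, 0.9], [0.2, -0.1]])

lhs = sc(g @ xi, g @ eta, h)                 # (g xi, g eta)_h
rhs = sc(xi, eta, np.linalg.inv(g) @ h)      # (xi, eta)_{g^{-1} h}
print(np.isclose(lhs, rhs))                  # True
```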

In a similar way we can propagate the scalar product from the Lie algebra to the whole group by right shifts. The scalar product (that is, Riemannian metric) so obtained is then right-invariant. But these two Riemannian metrics will in general be different. Let us check under which conditions they coincide. For them to coincide we would need
$(g^{-1}\xi,g^{-1}\eta)_e=(\xi g^{-1},\eta g^{-1})_e,$
for all $\xi,\eta$ tangent at $g.$ Denoting $\xi'=g^{-1}\xi,\eta'=g^{-1}\eta,$ we would have to have
$(\xi',\eta')_e=(g\xi'g^{-1},g\eta'g^{-1})_e$
for all $\xi’,\eta’$ in the Lie algebra. In other words: the scalar product at the identity would have to be Ad-invariant. We recall that “Ad” denotes the adjoint representation of the group on its Lie algebra defined as
$Ad_g:\xi\mapsto g\xi g^{-1}.$

The metric we have defined using trace

$(\xi,\eta)_e=\frac{1}{2}\mathrm{Tr}(\xi\eta)$

has this property because of the general property of the trace: $\mathrm{Tr}(AB)=\mathrm{Tr}(BA)$. But we could easily have started with a different scalar product. Each scalar product gives rise to different geodesics and a different curvature. The group endowed with a bi-invariant metric is, so to say, maximally “round”.
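The Ad-invariance of this trace form is again easy to spot-check numerically (the matrices below are arbitrary choices of mine):

```python
import numpy as np

def tr_form(a, b):
    """The Ad-invariant scalar product (a, b)_e = (1/2) Tr(ab)."""
    return 0.5 * np.trace(a @ b)

def Ad(g, xi):
    """Adjoint action Ad_g(xi) = g xi g^{-1}."""
    return g @ xi @ np.linalg.inv(g)

g = np.array([[2.0, 3.0], [1.0, 2.0]])       # det 1: an element of SL(2,R)
xi  = np.array([[ 1.0,  0.4], [-0.2, -1.0]]) # traceless: elements of sl(2,R)
eta = np.array([[ 0.5, -0.7], [ 0.3, -0.5]])

print(np.isclose(tr_form(Ad(g, xi), Ad(g, eta)), tr_form(xi, eta)))   # True
```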
When the metric is left (or right) invariant, the group universe looks the same from every point: it is “homogeneous”. But when the metric is also bi-invariant, at every point it also looks maximally the same in all directions: it has a natural “isotropy” property.