Becoming anti de Sitter

In the last post we were discussing Killing vector fields of the group SL(2,R). It was done without specifying any reason for doing it – except that it somehow came our way naturally. But now there is an opportunity to relate our theme to something that is fashionable in theoretical physics: the holographic principle and the AdS/CFT correspondence.

We were playing with AdS without knowing it. Here AdS stands for “anti-de Sitter” space. Let us therefore look into the content of one pedagogical paper dealing with the subject: “Anti de Sitter space, squashed and stretched” by Ingemar Bengtsson and Patrik Sandin. We will not be squashing and stretching – not yet. Our task is “to connect” to what other people are doing. Let us start reading Section 2 of the paper, “Geodetic congruence in anti-de Sitter space”. There we read:

For the 2+1 dimensional case the definition can be reformulated in an interesting way. Anti-de Sitter space can be regarded as the group manifold of SL(2,{\bf R}), that is as the set of matrices

(1)   \begin{equation*} A = \left[ \begin{array}{cc} V+X & Y+U \\ Y-U & V-X \end{array} \right] \ , \hspace{10mm} \mbox{det}A = U^2 + V^2 - X^2 - Y^2 = 1 \ .  \end{equation*}

It is clear that every SL(2,R) matrix A=\left[\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right] can be uniquely written in the above form.
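Indeed, Eq. (1) can be inverted explicitly: V=(\alpha+\delta)/2, X=(\alpha-\delta)/2, Y=(\beta+\gamma)/2, U=(\beta-\gamma)/2. Here is a minimal numerical check of the decomposition (a Python sketch using numpy; the sample matrix is my choice):

```python
import numpy as np

# Coordinates from Eq. (1): alpha = V+X, beta = Y+U, gamma = Y-U, delta = V-X,
# inverted as V = (alpha+delta)/2, X = (alpha-delta)/2, and so on.
def decompose(A):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    V, X = (a + d) / 2, (a - d) / 2
    Y, U = (b + c) / 2, (b - c) / 2
    return V, U, X, Y

A = np.array([[2.0, 3.0], [1.0, 2.0]])   # a sample SL(2,R) matrix, det = 1
V, U, X, Y = decompose(A)
# reconstruction, and the determinant condition of Eq. (1)
print(np.allclose([[V + X, Y + U], [Y - U, V - X]], A))   # True
print(V**2 + U**2 - X**2 - Y**2)                          # 1.0 = det A
```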

But Section 2 starts with something else:

\noindent Anti-de Sitter space is defined as a quadric surface embedded in a flat space of signature (+ \dots +--). Thus 2+1 dimensional anti-de Sitter space is defined as the hypersurface

(2)   \begin{equation*} X^2 + Y^2 - U^2 - V^2 = - 1 \end{equation*}

\noindent embedded in a 4 dimensional flat space with the metric

(3)   \begin{equation*} ds^2 = dX^2 + dY^2 - dU^2 - dV^2 \ . \end{equation*}

\noindent The Killing vectors are denoted J_{XY} = X\partial_Y - Y\partial_X,
J_{XU} = X\partial_U + U\partial_X, and so on. The topology is now
{\bf R}^2 \times {\bf S}^1, and one may wish to go to the covering
space in order to remove the closed timelike curves. Our arguments
will mostly not depend on whether this final step is taken.

For the 2+1 dimensional case the definition can be reformulated in an interesting way. Anti-de Sitter space can be regarded as the group manifold of SL(2,{\bf R}), that is as the set of matrices

(4)   \begin{equation*} g = \left[ \begin{array}{cc} V+X & Y+U \\ Y-U & V-X \end{array} \right] \ , \hspace{10mm} \mbox{det}g = U^2 + V^2 - X^2 - Y^2 = 1 \ .  \end{equation*}

\noindent The group manifold is equipped with its natural metric, which is invariant under transformations g \rightarrow g_1gg_2^{-1}, g_1, g_2 \in SL(2, {\bf R}). The Killing vectors can now be organized into two orthonormal and mutually commuting sets,

(5)   \begin{eqnarray*} & J_1 = - J_{XU} - J_{YV} \hspace{15mm} & \tilde{J}_1 =  - J_{XU} + J_{YV} \\ & J_2 = - J_{XV} + J_{YU} \hspace{15mm} & \tilde{J}_2 = - J_{XV} - J_{YU} \\ & J_0 = - J_{XY} - J_{UV} \hspace{15mm} & \tilde{J}_0 = J_{XY} - J_{UV} \ . \end{eqnarray*}

\noindent They obey

(6)   \begin{equation*} ||J_1||^2 = ||J_2||^2 = - ||J_0||^2 = 1 \ , \hspace{3mm} ||\tilde{J}_1||^2 = ||\tilde{J}_2||^2 = - ||\tilde{J}_0||^2 = 1 \ . \end{equation*}
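Everything in the quoted passage can be checked numerically. Here is a small Python sketch (with numpy) verifying the first half of Eq. (6) at a sample point of the quadric; the sign convention for J_{UV}, which the quotation leaves implicit in “and so on”, is my assumption (J_{UV}=U\partial_V-V\partial_U), and the tilde fields check out the same way:

```python
import numpy as np

# A sample point on the quadric X^2 + Y^2 - U^2 - V^2 = -1
X, Y, U, V = 0.3, 0.4, 0.5, 1.0          # 0.09 + 0.16 - 0.25 - 1.0 = -1
eta = np.diag([1.0, 1.0, -1.0, -1.0])    # metric (3) in coordinates (X, Y, U, V)

# Components of the elementary Killing fields at this point
J_XY = np.array([-Y,  X, 0., 0.])        # X d_Y - Y d_X
J_UV = np.array([0., 0., -V,  U])        # U d_V - V d_U  (assumed convention)
J_XU = np.array([ U, 0.,  X, 0.])        # X d_U + U d_X
J_YV = np.array([0.,  V, 0.,  Y])        # Y d_V + V d_Y
J_XV = np.array([ V, 0., 0.,  X])        # X d_V + V d_X
J_YU = np.array([0.,  U,  Y, 0.])        # Y d_U + U d_Y

norm2 = lambda v: v @ eta @ v
J1, J2, J0 = -J_XU - J_YV, -J_XV + J_YU, -J_XY - J_UV
print(norm2(J1), norm2(J2), norm2(J0))   # approximately 1, 1, -1
```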

The story here is this: 2\times 2 real matrices form a four-dimensional real vector space. We can use \alpha,\beta,\gamma,\delta or V,U,X,Y as coordinates y^1,y^2,y^3,y^4 there. The condition of being of determinant one defines a three-dimensional hypersurface in \mathbf{R}^4. We can endow \mathbf{R}^4 with a scalar product determined by the matrix G defined by:

(7)   \begin{equation*}G=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}.\end{equation*}

The scalar product is then defined as

(8)   \begin{equation*}(y,y')=y^TGy'=G_{ij}y^{i}y'^{j}=y^1y'^1+y^2y'^2-y^3y'^3-y^4y'^4.\end{equation*}

This scalar product is invariant with respect to the group O(2,2) of 4\times 4 real matrices A satisfying:

(9)   \begin{equation*}A^TGA=G.\end{equation*}

That is, if A\in O(2,2) then (Ay,Ay')=(y,y') for all y,y' in \mathbf{R}^4.
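A quick numerical illustration (a Python sketch with numpy; the particular matrix is my choice): an O(2,2) element built as a hyperbolic rotation in the (y^1,y^3) plane, analogous to a Lorentz boost, preserves the scalar product.

```python
import numpy as np

G = np.diag([1.0, 1.0, -1.0, -1.0])

# A "boost" mixing the first plus-direction with the first minus-direction
t = 0.7
A = np.eye(4)
A[0, 0] = A[2, 2] = np.cosh(t)
A[0, 2] = A[2, 0] = np.sinh(t)

print(np.allclose(A.T @ G @ A, G))       # True: A satisfies Eq. (9)

y  = np.array([1.0, 2.0, 0.5, -1.0])
y2 = np.array([0.3, -1.0, 2.0, 0.7])
print(np.isclose((A @ y) @ G @ (A @ y2), y @ G @ y2))   # scalar product preserved
```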

What I will be writing now is “elementary” in the sense that “everybody in the business” knows it, and if asked will often not be able to tell where and when she/he learned it. But this is a blog, and the subject is so pretty that it would be a pity if people “not in the business” missed it.

Equation (2) can then be written as (y,y)=-1. It determines a “generalized hyperboloid” in \mathbf{R}^4 that is invariant with respect to the action of O(2,2). Thus the situation is analogous to the one we have seen in The disk and the hyperbolic model. There we had the Poincaré disk realized as a two-dimensional hyperboloid in a three-dimensional space with signature (2,1); here we have SL(2,R) realized as a generalized hyperboloid in a four-dimensional space with signature (2,2). Before it was the group O(2,1) that was acting on the hyperboloid; now it is the group O(2,2). Let us look at the vector fields of the generators of this group. By differentiating Eq. (9) at the group identity we find that each generator \Xi must satisfy the equation:

(10)   \begin{equation*}\Xi^TG+G\Xi=0.\end{equation*}

This equation can be also written as

(11)   \begin{equation*}(G\Xi)^T+G\Xi=0.\end{equation*}

Thus G\Xi must be antisymmetric. In n dimensions the space of antisymmetric matrices is n(n-1)/2-dimensional. For us n=4, therefore the Lie algebra so(2,2) is 6-dimensional, like the Lie algebra so(4) – they are simply related by matrix multiplication \Xi\mapsto G\Xi. We need a basis in so(2,2), so let us start with a basis in so(4). Let M_{(\mu\nu)} denote the elementary antisymmetric matrix that has 1 in row \mu, column \nu and -1 in row \nu column \mu for \mu\neq \nu, and zeros everywhere else. In a formula

    \[ (M_{(\mu\nu)})_{\alpha\beta}=\delta_{\alpha\mu}\delta_{\beta\nu}-\delta_{\alpha\nu}\delta_{\beta\mu},\]

where \delta_{\mu\nu} is the Kronecker delta symbol: \delta_{\mu\nu}=1 for \mu=\nu, and =0 for \mu\neq\nu.

As we have mentioned above, the matrices \Xi_{(\mu\nu)}=G^{-1}M_{(\mu\nu)} then form a basis of the Lie algebra so(2,2). We can list them as follows

(12)   \begin{equation*}\Xi_{(12)}=\left[\begin{smallmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&0\\0&0&0&0\end{smallmatrix}\right], \Xi_{(13)}=\left[\begin{smallmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{smallmatrix}\right], \Xi_{(14)}=\left[\begin{smallmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{smallmatrix}\right].\end{equation*}

(13)   \begin{equation*}\Xi_{(23)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&0\end{smallmatrix}\right], \Xi_{(24)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&1&0&0\end{smallmatrix}\right], \Xi_{(34)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\end{smallmatrix}\right].\end{equation*}
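These six matrices can be generated and checked mechanically. A short Python sketch (with numpy) constructing \Xi_{(\mu\nu)}=G^{-1}M_{(\mu\nu)} for all pairs \mu<\nu and verifying Eq. (10):

```python
import numpy as np
from itertools import combinations

G = np.diag([1.0, 1.0, -1.0, -1.0])

def M(mu, nu):
    # elementary antisymmetric matrix: +1 in row mu, column nu; -1 at (nu, mu)
    m = np.zeros((4, 4))
    m[mu, nu], m[nu, mu] = 1.0, -1.0
    return m

# Xi_(mu nu) = G^{-1} M_(mu nu) for all pairs mu < nu
basis = [np.linalg.inv(G) @ M(mu, nu) for mu, nu in combinations(range(4), 2)]

# each generator satisfies Eq. (10): Xi^T G + G Xi = 0
print(all(np.allclose(Xi.T @ G + G @ Xi, 0.0) for Xi in basis))   # True
print(len(basis))                                                 # 6 = dim so(2,2)
```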

On the path to AdS

In the next post we will relate these generators to J_i,\tilde{J}_i from the anti-de Sitter paper by Bengtsson and Sandin, and to our Killing vector fields \xi_{iL},\xi_{iR} from the last note.

Riemannian metrics – left, right and bi-invariant

The discussion in this post applies to Riemannian metrics on Lie groups in general, but we will concentrate
on just the one case at hand: SL(2,R). Let G be a Lie group. Vectors tangent to paths in G at the identity e\in G form the Lie algebra of the group. Usually it is denoted \mathrm{Lie}(G). It is a real linear space endowed with the commutator. For matrix groups the Lie algebra is a space of matrices (for the group SL(2,R) the Lie algebra sl(2,R) consists of matrices of zero trace), and the commutator is realized as the commutator of matrices: [A,B]=AB-BA.

Suppose we have a scalar product (\xi,\eta)_e defined on the Lie algebra. For instance, if \xi,\eta are matrices, we may try to define (\xi,\eta)_e=\mathrm{Tr}(\xi\eta). That is a natural definition, and the scalar product so defined is automatically symmetric. But it is not always nondegenerate, so one needs to be careful. Once we have a scalar product at the identity, we can use group multiplication to propagate it to the whole group space by left translations:

(1)   \begin{equation*}(\xi,\eta)_g=(g^{-1}\xi,g^{-1}\eta)_e,\end{equation*}

if \xi,\eta are tangents to paths at g.
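For SL(2,R), by the way, the trace form mentioned above is nondegenerate, of signature (+,+,-). A small check (a Python sketch with numpy; the particular basis of sl(2,R) is my choice):

```python
import numpy as np

# A basis of sl(2,R), the traceless real 2x2 matrices
X1 = np.array([[1., 0.], [0., -1.]])
X2 = np.array([[0., 1.], [1., 0.]])
X3 = np.array([[0., 1.], [-1., 0.]])

# Gram matrix of the trace form (xi, eta)_e = Tr(xi eta)
basis = [X1, X2, X3]
gram = np.array([[np.trace(a @ b) for b in basis] for a in basis])
print(gram)                              # diag(2, 2, -2): signature (+, +, -)
print(abs(np.linalg.det(gram)) > 0)      # True: nondegenerate
```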

Notice that we are multiplying tangent vectors by group elements. For matrix groups that is easy: we simply multiply matrices. For more general groups it is understood that if \xi is tangent to the path \gamma(t), then g\xi is tangent to g\gamma(t).

The scalar product defined everywhere by Eq. (1) automatically has the property of being left-invariant:

(2)   \begin{equation*}(g\xi,g\eta)=(\xi,\eta).\end{equation*}

The above equation needs explanation; its meaning follows from the context. On the left-hand side we have a scalar product, so it is calculated at a certain point, and we need to give this point a name. The symbol g is already used in the formula for the shift, so let us choose h. We therefore specify the left-hand side to mean (g\xi,g\eta)_h.

Now, if g\xi is tangent at h, then \xi itself must be tangent at g^{-1}h. So, the complete formula should be:

(3)   \begin{equation*}(g\xi,g\eta)_h=(\xi,\eta)_{g^{-1}h}.\end{equation*}

That is supposed to hold for any g,h in the group. How do we prove it? We use the definition (1). The left hand side becomes

(4)   \begin{equation*}(g\xi,g\eta)_h=(h^{-1}(g\xi),h^{-1}(g\eta))_e.\end{equation*}

The right hand side becomes

(5)   \begin{equation*}(\xi,\eta)_{g^{-1}h}=((g^{-1}h)^{-1}\xi,(g^{-1}h)^{-1}\eta)_e.\end{equation*}

The left and right hand sides equal because of the associativity of multiplication: (g^{-1}h)^{-1}\xi=h^{-1}(g\xi).

In a similar way we can propagate the scalar product from the Lie algebra to the whole group by right shifts. The scalar product (aka Riemannian metric) so obtained is then right-invariant. But these two Riemannian metrics will in general be different. Let us check under which conditions they coincide. In order for them to coincide we would have to have:

    \[(g^{-1}\xi,g^{-1}\eta)_e=(\xi g^{-1},\eta g^{-1})_e,\]

for all \xi,\eta tangent at g. Denoting \xi'=g^{-1}\xi,\ \eta'=g^{-1}\eta (so that \xi=g\xi',\ \eta=g\eta'), we would have to have

    \[(\xi',\eta')_e=(g\xi'g^{-1},g\eta'g^{-1})_e,\]

for all \xi',\eta' in the Lie algebra. In other words: the scalar product at the identity would have to be Ad-invariant. We recall that “Ad” denotes the adjoint representation of the group on its Lie algebra defined as

    \[ Ad_g:\xi\mapsto g\xi g^{-1}.\]

The metric we have defined using the trace,

    \[(\xi,\eta)_e=\mathrm{Tr}(\xi\eta),\]

has this property because of the general property of the trace: \mathrm{Tr}(AB)=\mathrm{Tr}(BA). But we could easily start with a different scalar product. Each scalar product gives rise to different geodesics and a different curvature. The group endowed with a bi-invariant metric is, so to say, maximally “round”.
When the metric is left- (or right-) invariant, the group universe looks the same from every point – it is “homogeneous”. But when it is also bi-invariant, at every point it also looks maximally the same in all directions – it has a natural “isotropy” property.
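The Ad-invariance of the trace metric is also easy to confirm numerically (a Python sketch with numpy; the sample matrices are my choice):

```python
import numpy as np

# Ad-invariance of the trace form: Tr((g xi g^-1)(g eta g^-1)) = Tr(xi eta)
g   = np.array([[2., 3.], [1., 2.]])     # det = 1: an element of SL(2,R)
gi  = np.linalg.inv(g)
xi  = np.array([[1., 2.], [0., -1.]])    # traceless: elements of sl(2,R)
eta = np.array([[0., 1.], [3., 0.]])

lhs = np.trace(g @ xi @ gi @ g @ eta @ gi)
print(np.isclose(lhs, np.trace(xi @ eta)))   # True, by Tr(AB) = Tr(BA)
```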

Riemannian metric on SL(2,R)

Every Lie group is like a Universe. Now it is time for us to play with the cosmology of SL(2,R). In Real magic – space-time in Lie algebra we have already started this game – we defined a natural “metric” on the Lie algebra sl(2,R). We defined the scalar product of any two vectors tangent to the group at the group identity, and we have seen that this scalar product is similar to that of Minkowski space-time, except with only two space (and one time) dimensions.

A Riemannian metric, on the other hand, is defined when we have a scalar product of tangent vectors at every point. In recent posts we were playing with the Riemannian metric on the upper half-plane, a two-dimensional homogeneous space for the group – the cross-section of the torus. Now we want to define a Riemannian metric on the whole torus – an interesting extension.

We already have the metric at one point – at the group identity. Can we extend this definition to the whole torus, and do it in a natural way?

Of course we can, because we are on the group. Here is how it is done. Suppose we have two vectors tangent at some group point g. I am using the letter g now, but g here is just another notation for a real matrix A of determinant one, an element of SL(2,R). That is, we have two paths \gamma_1(t),\gamma_2(t) with \gamma_1(0)=\gamma_2(0)=g. Our two tangent vectors, say \xi_1,\xi_2, are vectors tangent to \gamma_1(t) and \gamma_2(t) at g; they are represented by the matrices

(1)   \begin{equation*}\xi_i=\frac{d\gamma_i(t)}{dt}|_{t=0}, \quad (i=1,2).\end{equation*}

The matrices \xi_1,\xi_2 are not in the Lie algebra. For instance, in our case they will not (in general) be of trace zero. That is because at t=0 the two paths are not at the identity. But we can shift them to the identity: the paths g^{-1}\gamma_i(t) are at the identity at t=0. Thus even if \xi_i are not in the Lie algebra, g^{-1}\xi_i are. We can use this fact and define the scalar product at g as follows:

(2)   \begin{equation*}(\xi_1,\xi_2)_g=(g^{-1}\xi_1,g^{-1}\xi_2)_e.\end{equation*}

I am using the symbol e rather than the unit matrix I to denote the group identity here, because the construction above is quite general: it is used for any Lie group, not just SL(2,R).

Thus once we have scalar product defined in the Lie algebra, we have it defined everywhere.
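As an illustration, here is the construction in a few lines of Python (with numpy), taking the trace form as the scalar product at the identity; the sample point and tangent directions are my choice:

```python
import numpy as np

# Left-invariant metric on SL(2,R), Eq. (2), with the trace form at the identity
g  = np.array([[2., 3.], [1., 2.]])      # a point of SL(2,R), det = 1
gi = np.linalg.inv(g)

# tangent vectors at g: tangents to the paths g exp(t a), g exp(t b)
a = np.array([[1., 0.], [0., -1.]])      # elements of sl(2,R)
b = np.array([[0., 1.], [1., 0.]])
xi1, xi2 = g @ a, g @ b

# (xi1, xi2)_g = (g^-1 xi1, g^-1 xi2)_e = Tr(a b)
print(np.trace(gi @ xi1 @ gi @ xi2))     # approximately Tr(a b) = 0
```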

Of course one can ask: why not use another definition, with right shifts? What is wrong with

(3)   \begin{equation*}(\xi_1,\xi_2)_g=(\xi_1g^{-1},\xi_2g^{-1})_e.\end{equation*}

Nothing is wrong. The scalar product at the identity that we defined in Real magic – space-time in Lie algebra has the invariance property expressed there in Eq. (11). It is left as homework to show that, using this property, the two definitions above lead to the same Riemannian metric!
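For the trace form the homework can at least be checked numerically (a Python sketch with numpy; the sample data are my choice): the left-shifted and right-shifted scalar products agree, thanks to the cyclicity of the trace.

```python
import numpy as np

# Left-shifted vs right-shifted metric for the trace form
g   = np.array([[1., 1.], [0.5, 1.5]])    # det = 1
gi  = np.linalg.inv(g)
xi1 = g @ np.array([[0., 2.], [1., 0.]])  # two tangent vectors at g
xi2 = g @ np.array([[1., 0.], [0., -1.]])

left  = np.trace((gi @ xi1) @ (gi @ xi2))   # Eq. (2)
right = np.trace((xi1 @ gi) @ (xi2 @ gi))   # Eq. (3)
print(np.isclose(left, right))              # True, by cyclicity of the trace
```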