Killing vectors, geodesics, and Noether’s theorem

Consider Lie groups of matrices: SO(3) or SO(2,1). Their double covering groups are SU(2) and SU(1,1) (or, after a Cayley transform, SL(2,R)). We prefer to use these covering groups as they have simpler topologies: SU(2) is topologically a three-sphere, SL(2,R) an open solid torus. Our discussion will be quite general and applicable to other Lie groups as well.

We denote by Lie(G) the Lie algebra of G. It is a vector space, the set of all tangent vectors at the identity e of the group. It is also an algebra with respect to the commutator.

G acts on its Lie algebra by the adjoint representation. If X\in Lie(G) and a\in G, then

(1)   \begin{equation*} Ad_a: X\mapsto aXa^{-1}.\end{equation*}

We define the scalar product (X,Y) on Lie(G) using the trace

(2)   \begin{equation*}(X,Y)=\mbox{const}\frac{1}{2}\mbox{Re}(\mbox{Tr}(XY)).\end{equation*}


In each particular case we will choose the constant so that the formulas are simple.

Due to trace properties this scalar product is invariant with respect to the adjoint representation:

(3)   \begin{equation*}(aXa^{-1},aYa^{-1})=(X,Y).\end{equation*}

We will assume that this scalar product is indeed a scalar product, that is, we assume that it is non-degenerate. For SO(3) and SO(2,1) it certainly is. Lie groups with this property are called semisimple.
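As a quick illustration (a sketch of mine, not part of the original text), the Ad-invariance (3) of the trace form can be verified numerically for random elements of Lie(SU(2)):

```python
import numpy as np

rng = np.random.default_rng(0)

def scalar(X, Y, const=1.0):
    # (X, Y) = const * (1/2) * Re Tr(XY), as in Eq. (2)
    return const * 0.5 * np.trace(X @ Y).real

def random_su2():
    # random traceless anti-Hermitian 2x2 matrix, an element of Lie(SU(2))
    a, b, c = rng.normal(size=3)
    return np.array([[1j * c, b + 1j * a],
                     [-b + 1j * a, -1j * c]])

X, Y, Z = random_su2(), random_su2(), random_su2()

# a = e^Z lies in SU(2); exponentiate via eigendecomposition
w, V = np.linalg.eig(Z)
a = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)
a_inv = np.linalg.inv(a)

# Ad-invariance, Eq. (3): (a X a^-1, a Y a^-1) = (X, Y)
lhs = scalar(a @ X @ a_inv, a @ Y @ a_inv)
rhs = scalar(X, Y)
assert abs(lhs - rhs) < 1e-10
```

The invariance holds for any value of the constant, since it only uses the cyclic property of the trace.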

Let X_i be a basis in Lie(G). The structure constants C_{ij}^{k} are then defined through

(4)   \begin{equation*}[X_i,X_j]=C_{ij}^k\,X_k.\end{equation*}

We denote by \mathring{g}_{ij} the matrix of the metric tensor in the basis X_i

(5)   \begin{equation*}\mathring{g}_{ij}=(X_i,X_j).\end{equation*}

The inverse matrix is denoted \mathring{g}^{ij} so that \mathring{g}_{ij}\mathring{g}^{jk}=\delta^k_i.

For SU(2) the Lie algebra consists of anti-Hermitian 2\times 2 matrices of zero trace. For the basis we can take

(6)   \begin{equation*}X_1=\frac{1}{2}\begin{bmatrix}0&i\\i&0\end{bmatrix},\,X_2=\frac{1}{2}\begin{bmatrix}0&1\\-1&0\end{bmatrix}, \, X_3=\frac{1}{2}\begin{bmatrix}i&0\\0&-i\end{bmatrix}.\end{equation*}

For the constant \mbox{const} we choose \mbox{const}=-4. Then \mathring{g}_{ij}=\mathring{g}^{ij}=\mbox{diag}(1,1,1).

The structure constants are

(7)   \begin{equation*}C_{ij}^k=-\mathring{g}^{kl}\epsilon_{ijl}.\end{equation*}

In this case, since \mathring{g}_{ij} is the identity matrix, there is no need to distinguish between lower and upper indices. But in the case of SU(1,1) the distinction will be important.
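A quick numerical check of the basis (6) (my own sketch, not part of the original text): with this explicit basis the commutators close with a minus sign, [X_1,X_2]=-X_3 and cyclically, and the normalization \mbox{const}=-4 makes the basis orthonormal:

```python
import numpy as np

# the basis (6) of Lie(SU(2))
X1 = 0.5 * np.array([[0, 1j], [1j, 0]])
X2 = 0.5 * np.array([[0, 1], [-1, 0]], dtype=complex)
X3 = 0.5 * np.array([[1j, 0], [0, -1j]])

def comm(A, B):
    return A @ B - B @ A

# commutation relations of this particular basis
assert np.allclose(comm(X1, X2), -X3)
assert np.allclose(comm(X2, X3), -X1)
assert np.allclose(comm(X3, X1), -X2)

# (X, Y) = const * (1/2) Re Tr(XY); const = -4 gives an orthonormal basis
def scalar(X, Y, const=-4.0):
    return const * 0.5 * np.trace(X @ Y).real

g = np.array([[scalar(Xi, Xj) for Xj in (X1, X2, X3)]
              for Xi in (X1, X2, X3)])
assert np.allclose(g, np.eye(3))
```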

We will now consider a general left-invariant metric on the group G. The discussion below is a continuation of the discussion in Riemannian metrics – left, right and bi-invariant.

That is, we now have two scalar products on Lie(G) – the Ad-invariant scalar product with metric \mathring{g}, and another one, with metric g. We propagate the scalar products from the identity e to other points in the group using left translations (see Eq. (1) in Riemannian metrics – left, right and bi-invariant). We have a small notational problem here, because the letter g often denotes a group element, while here it also denotes the metric. Moreover, we have two scalar products and we need to distinguish between them. We will write g_a(\xi,\eta) for the scalar product with respect to the metric g of two vectors tangent at a\in G. Then left invariance means

(8)   \begin{equation*}g_a(\xi,\eta)=g_e(a^{-1}\xi,a^{-1}\eta),\end{equation*}

which implies for \xi,\eta tangent at b

(9)   \begin{equation*}g_{ab}(a\xi,a\eta)=g_b(\xi,\eta),\quad a,b\in G.\end{equation*}

The infinitesimal formulation of left invariance is that the vector fields \xi(a)=\xi a are “Killing vector fields for the metric” – Lie derivatives of the metric (cf. SL(2,R) Killing vector fields in coordinates, Eq. (13)) with respect to these vector fields vanish. What we need is a very important result from differential geometry: scalar products of Killing vector fields with vectors tangent to geodesics are constant along each geodesic. For the convenience of the reader we provide the definitions and a proof of this result (a version of Noether’s theorem). Here we will assume that there are coordinates x^1,...,x^n on G. Later on we will get rid of these coordinates, but for now we follow the standard routine of differential geometry with coordinates.

We define the Christoffel symbols of the Levi-Civita connection

(10)   \begin{equation*}\Gamma_{kl,m}=\frac{1}{2}\left(\frac{\partial g_{mk}}{\partial x^{l}}+\frac{\partial g_{ml}}{\partial x^{k}}-\frac{\partial g_{kl}}{\partial x^{m}}\right).\end{equation*}

(11)   \begin{equation*}\Gamma^{i}_{kl}=g^{im}\Gamma_{kl,m}=\frac{1}{2}g^{im}\left(\frac{\partial g_{mk}}{\partial x^{l}}+\frac{\partial g_{ml}}{\partial x^{k}}-\frac{\partial g_{kl}}{\partial x^{m}}\right).\end{equation*}

The geodesic equations are then (in Geodesics on upper half-plane factory direct we have already touched this subject)

(12)   \begin{equation*}\frac{d^2 x^i}{ds^2}=  -\Gamma^{i}_{jk}\frac{dx^j}{ds}  \frac{dx^k}{ds}.\end{equation*}
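These formulas are easy to put on a computer. Here is a small sketch (mine, not from the original post; the helper names are my own) that builds \Gamma^{i}_{kl} from a metric by central differences, checked against the well-known values for the hyperbolic upper half-plane:

```python
import numpy as np

def christoffel(g, x, h=1e-6):
    """Gamma^i_{kl} at the point x, Eq. (11), via central differences of g."""
    n = len(x)
    dg = np.zeros((n, n, n))          # dg[m, k, l] = d g_{kl} / d x^m
    for m in range(n):
        e = np.zeros(n); e[m] = h
        dg[m] = (g(x + e) - g(x - e)) / (2 * h)
    ginv = np.linalg.inv(g(x))
    Gamma = np.zeros((n, n, n))
    for i in range(n):
        for k in range(n):
            for l in range(n):
                Gamma[i, k, l] = 0.5 * sum(
                    ginv[i, m] * (dg[l, m, k] + dg[k, m, l] - dg[m, k, l])
                    for m in range(n))
    return Gamma

# hyperbolic upper half-plane, coordinates (x, y): g = diag(1/y^2, 1/y^2)
def g_halfplane(p):
    x, y = p
    return np.diag([1.0 / y**2, 1.0 / y**2])

G = christoffel(g_halfplane, np.array([0.0, 2.0]))
# known values at y = 2: Gamma^x_{xy} = -1/y = -0.5, Gamma^y_{xx} = 1/y = 0.5
assert abs(G[0, 0, 1] + 0.5) < 1e-5
assert abs(G[1, 0, 0] - 0.5) < 1e-5
```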

A vector field \xi is a Killing vector field for g_{ij} if the Lie derivative of g_{ij} with respect to \xi vanishes, i.e.

(13)   \begin{equation*}0=(L_\xi g)_{ij}=\xi^k\partial_k g_{ij}+g_{ik}\partial_j \xi^k+g_{jk}\partial_i\xi^k.\end{equation*}

The scalar product of the Killing vector field and the tangent vector to a geodesic is constant. That is the “conservation law”. A short proof can be found in Sean Carroll’s online book “Lecture Notes on General Relativity”. A discussion of the proof can be found on Physics Forums. But the result is a simple consequence of the definitions: all one needs is differentiation of composite functions and renaming of indices. Just for the fun of it, let us do the direct, non-elegant, brute-force proof.

Suppose x^{i}(t) is a geodesic, and \xi is a Killing field. The statement is that along the geodesic the scalar product is constant. That means we have to show that

    \[ g_{ij}(x(t))\,\dot{x}^{i}(t)\,\xi^{j}(x(t))=\mbox{const}.\]

We differentiate with respect to t, and we are supposed to get zero. So, let’s do it. We have the derivative of a product of three factors, so we will get three terms t_1,t_2,t_3:

    \[\frac{d}{dt}\left(g_{ij}(x(t))\,\dot{x}^{i}(t)\,\xi^{j}(x(t))\right)=t_1+t_2+t_3,\]

where

    \[t_1=\frac{dg_{ij}(x(t))}{dt}\,\dot{x}^{i}\,\xi^{j},\quad t_2=g_{ij}\,\ddot{x}^{i}\,\xi^{j},\quad t_3=g_{ij}\,\dot{x}^{i}\,\frac{d\xi^{j}(x(t))}{dt}.\]
Let us calculate the derivatives. After we are done, in order to simplify the notation, we will skip the arguments.



    \[t_1=\partial_k g_{ij}\,\dot{x}^{k}\,\dot{x}^{i}\,\xi^{j}.\]

Then, from Eq. (12)



    \[t_2=-\Gamma_{kl,j}\dot{x}^k\dot{x}^l\xi^j=-\frac{1}{2}\partial_k g_{lj}\dot{x}^k\dot{x}^l\xi^j-\frac{1}{2}\partial_l g_{kj}\dot{x}^k\dot{x}^l\xi^j+\frac{1}{2}\partial_jg_{kl}\dot{x}^k\dot{x}^l\xi^j.\]

Renaming the dummy summation indices k,l we see that the first two terms of t_2 are identical, therefore

    \[t_2=-\partial_k g_{lj}\dot{x}^k\dot{x}^l\xi^j+\frac{1}{2}\partial_jg_{kl}\dot{x}^k\dot{x}^l\xi^j.\]

Again, renaming the dummy summation indices, we see that the first term of t_2 cancels out with t_1, therefore

    \[t_1+t_2=\frac{1}{2}\partial_jg_{kl}\,\dot{x}^k\,\dot{x}^l\,\xi^j.\]
For t_3 we have

    \[t_3=g_{ij}\,\dot{x}^{i}\,\partial_k\xi^{j}\,\dot{x}^{k}.\]
Owing to the symmetry \dot{x}^{i}\dot{x}^k=\dot{x}^{k}\dot{x}^i, we can write it as

    \[t_3=\frac{1}{2}\left(g_{ij}\,\partial_k\xi^{j}+g_{kj}\,\partial_i\xi^{j}\right)\dot{x}^{i}\dot{x}^{k}.\]
We rename the indices to get

    \[t_1+t_2+t_3=\frac{1}{2}\left(\xi^j\partial_j g_{kl}+g_{kj}\,\partial_l\xi^{j}+g_{lj}\,\partial_k\xi^{j}\right)\dot{x}^k\dot{x}^l.\]

But the expression in parentheses vanishes owing to Eq. (13). This completes the proof.
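The conservation law just proved is also easy to check numerically. The sketch below (my own illustration, not part of the original argument) integrates a geodesic of the hyperbolic upper half-plane metric \mbox{diag}(1/y^2,1/y^2) with a hand-rolled RK4 stepper, and verifies that g_{ij}\dot{x}^{i}\xi^{j} for the Killing field \xi=\partial_x, namely \dot{x}/y^2, stays constant:

```python
import numpy as np

# geodesic equations (12) for the upper half-plane metric diag(1/y^2, 1/y^2):
#   x'' = 2 x' y' / y,   y'' = (y'^2 - x'^2) / y
def deriv(s):
    x, y, vx, vy = s
    return np.array([vx, vy, 2 * vx * vy / y, (vy * vy - vx * vx) / y])

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * h * k1)
    k3 = deriv(s + 0.5 * h * k2)
    k4 = deriv(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Killing field xi = d/dx; conserved quantity g_xx * x' * xi^x = x'/y^2
s = np.array([0.0, 1.0, 1.0, 0.5])   # initial (x, y, x', y')
Q0 = s[2] / s[1]**2
for _ in range(2000):
    s = rk4_step(s, 0.001)
Q = s[2] / s[1]**2
assert abs(Q - Q0) < 1e-8            # constant along the geodesic
```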

SL2R as anti de Sitter space cont.

We continue Becoming anti de Sitter.

Every matrix \Xi in the Lie algebra o(2,2) generates one-parameter group e^{\Xi t} of linear transformations of \mathbf{R}^4. Vectors tangent to orbits of this group form a vector field. Let us find the formula for the vector field generated by \Xi. The orbit through y\in \mathbf{R}^4 is

(1)   \begin{equation*}y(t)=e^{\Xi t}y.\end{equation*}

Differentiating at t=0 we find the vector field \Xi(y)

(2)   \begin{equation*}\Xi(y)=\Xi y.\end{equation*}

If \Xi is a matrix with components \Xi^{\mu}_{\phantom{\mu}\nu}, then \Xi(y) has components

(3)   \begin{equation*}\Xi^{\mu}(y)=\Xi^{\mu}_{\phantom{\mu}\nu}y^{\nu}.\end{equation*}

Vectors tangent to coordinate lines are often denoted as \partial_\mu. Therefore we can write the last formula as:

(4)   \begin{equation*}\Xi(y)=\Xi^{\mu}_{\phantom{\mu}\nu}y^{\nu}\partial_\mu.\end{equation*}

In the last post Becoming anti de Sitter we constructed six generators \Xi_{(\mu\nu)}. Their vector fields now become

(5)   \begin{equation*}\Xi_{(1,2)}=y^2\partial_1-y^1\partial_2,\quad \Xi_{(1,3)}=y^3\partial_1+y^1\partial_3,\quad \Xi_{(1,4)}=y^4\partial_1+y^1\partial_4,\end{equation*}

(6)   \begin{equation*}\Xi_{(2,3)}=y^3\partial_2+y^2\partial_3,\quad \Xi_{(2,4)}=y^4\partial_2+y^2\partial_4,\quad \Xi_{(3,4)}=-y^4\partial_3+y^3\partial_4.\end{equation*}
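These vector fields can be reproduced directly from the matrices \Xi_{(\mu\nu)}=G^{-1}M_{(\mu\nu)} constructed in Becoming anti de Sitter. The following sketch (mine, with zero-based indices) checks two of them at a sample point:

```python
import numpy as np

G = np.diag([1.0, 1.0, -1.0, -1.0])

def M(mu, nu, n=4):
    # elementary antisymmetric matrix: +1 at (mu, nu), -1 at (nu, mu)
    m = np.zeros((n, n))
    m[mu, nu], m[nu, mu] = 1.0, -1.0
    return m

def Xi(mu, nu):
    # generator of so(2,2): Xi_(mu nu) = G^{-1} M_(mu nu)
    return np.linalg.inv(G) @ M(mu, nu)

y = np.array([1.0, 2.0, 3.0, 4.0])   # an arbitrary point of R^4

# Xi_(1,2)(y) = y^2 d_1 - y^1 d_2, i.e. components (y^2, -y^1, 0, 0)
assert np.allclose(Xi(0, 1) @ y, [2.0, -1.0, 0.0, 0.0])
# Xi_(3,4)(y) = -y^4 d_3 + y^3 d_4, i.e. components (0, 0, -y^4, y^3)
assert np.allclose(Xi(2, 3) @ y, [0.0, 0.0, -4.0, 3.0])
```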

Bengtsson and Sandin, in their paper “Anti de Sitter space, squashed and stretched” discussed in the previous note, use the coordinates y^1=X,y^2=Y,y^3=U,y^4=V. Our vector field \Xi_{(1,3)} is the same as their J_{XU}, our \Xi_{(2,3)} the same as their J_{YU}, etc., while \Xi_{(1,2)}=-J_{XY}.

In SL(2,R) Killing vector fields in coordinates we introduced six Killing vector fields acting on the group manifold SL(2,R). How do they relate to the above six generators of the group O(2,2)?

Vectors from the fields \xi_{iL},\xi_{iR} are tangent to SL(2,R). We have expressed them in the coordinates x^1=\theta,\,x^2=r,\,x^3=u of the group SL(2,R). The manifold of SL(2,R) is a hypersurface of dimension 3 in \mathbf{R}^4 endowed with coordinates y^1,y^2,y^3,y^4. What is the relation between the components of the same vector in the two coordinate systems? The formula is easy to derive and is very simple. If \xi^{i},\, (i=1,2,3) are the components of the vector in SL(2,R) coordinates and \xi^{\mu},\, (\mu=1,2,3,4) are the components of the same vector in \mathbf{R}^4, then

(7)   \begin{equation*}\xi^\mu=\frac{\partial y^\mu}{\partial x^{i}}\xi^{i}.\end{equation*}

How do the y^\mu depend on the x^{i}? That is simple. In SL(2,R) Killing vector fields in coordinates we represented each matrix A from SL(2,R) as

(8)   \begin{equation*} A=\begin{bmatrix}  r \cos (\theta )+\frac{u \sin (\theta )}{r} & \frac{\cos (\theta ) u}{r}-r \sin (\theta ) \\  \frac{\sin (\theta )}{r} & \frac{\cos (\theta )}{r}\end{bmatrix}. \end{equation*}

On the other hand, in Becoming anti de Sitter, we represented it as

(9)   \begin{equation*}A=\begin{bmatrix} V+X & Y+U \\ Y-U & V-X \end{bmatrix}.\end{equation*}

Therefore the coordinates y^\mu are easily expressed in terms of the x^{i}. It remains to do the calculations. I have used computer algebra software to do these calculations for me. My Mathematica notebook with all the calculations can be downloaded from here. The result is the expression of the vector fields \xi_{iL},\xi_{iR} in terms of the generators of O(2,2) used in the paper on anti de Sitter spaces. Here is what I have obtained:

(10)   \begin{eqnarray*} \xi_{1R}&=&-J_1=J_{XU}+J_{YV},\\ \xi_{2R}&=&J_2=J_{YU}-J_{XV},\\ \xi_{3R}&=&J_0=-J_{XY}-J_{UV},\\ \xi_{1L}&=&\tilde{J}_1=J_{YV}-J_{XU},\\ \xi_{2L}&=&\tilde{J}_2=-J_{XV}-J_{YU},\\ \xi_{3L}&=&\tilde{J}_0=J_{XY}-J_{UV}. \end{eqnarray*}
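The change of coordinates behind these results is easy to check numerically. The sketch below (my own partial check, with hypothetical helper names) builds A from (\theta,r,u) via Eq. (8), reads off X,Y,U,V by inverting Eq. (9), and confirms that the point lands on the quadric X^2+Y^2-U^2-V^2=-1:

```python
import numpy as np

def A_from_coords(theta, r, u):
    # Eq. (8): parametrization of SL(2,R) by x^1 = theta, x^2 = r, x^3 = u
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[r * c + u * s / r, u * c / r - r * s],
                     [s / r, c / r]])

def XYUV(A):
    # invert Eq. (9): A = [[V+X, Y+U], [Y-U, V-X]]
    X = (A[0, 0] - A[1, 1]) / 2
    V = (A[0, 0] + A[1, 1]) / 2
    Y = (A[0, 1] + A[1, 0]) / 2
    U = (A[0, 1] - A[1, 0]) / 2
    return X, Y, U, V

A = A_from_coords(0.7, 1.3, -0.4)
assert abs(np.linalg.det(A) - 1.0) < 1e-12      # A is indeed in SL(2,R)
X, Y, U, V = XYUV(A)
# the quadric: X^2 + Y^2 - U^2 - V^2 = -det A = -1
assert abs(X**2 + Y**2 - U**2 - V**2 + 1.0) < 1e-12
```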

Bengtsson and Sandin then introduce their own parametrization of SL(2,R) and study the invariant metric on the group. We will find the connection between our approach and theirs in the next posts. We came to our problems starting from T-handles spinning freely in zero gravity. They are studying spinning black holes. It is interesting to see and to investigate the similarities.

Becoming anti de Sitter

In the last post we were discussing Killing vector fields of the group SL(2,R). It was done without specifying any reason for doing it – except that it somehow came our way naturally. But now there is an opportunity to relate our theme to something that is fashionable in theoretical physics: the holographic principle and the AdS/CFT correspondence.

We were playing with AdS without knowing it. Here AdS stands for “anti-de-Sitter” space. Let us therefore look into the content of one pedagogical paper dealing with the subject: “Anti de Sitter space, squashed and stretched” by Ingemar Bengtsson and Patrik Sandin. We will not be squashing and stretching – not yet. Our task is “to connect” to what other people are doing. Let us start reading Section 2 of the paper, “Geodetic congruence in anti-de Sitter space”. There we read:

For the 2+1 dimensional case the definition can be reformulated in an interesting way. Anti-de Sitter space can be regarded as the group manifold of SL(2,{\bf R}), that is as the set of matrices

(1)   \begin{equation*} A = \left[ \begin{array}{cc} V+X & Y+U \\ Y-U & V-X \end{array} \right] \ , \hspace{10mm} \mbox{det}A = U^2 + V^2 - X^2 - Y^2 = 1 \ .  \end{equation*}

It is clear that every SL(2,R) matrix A=\left[\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right] can be uniquely written in the above form.

But Section 2 starts with something else:

Anti-de Sitter space is defined as a quadric surface embedded in a flat space of signature (+ \dots +--). Thus 2+1 dimensional anti-de Sitter space is defined as the hypersurface

(2)   \begin{equation*} X^2 + Y^2 - U^2 - V^2 = - 1 \end{equation*}

embedded in a 4 dimensional flat space with the metric

(3)   \begin{equation*} ds^2 = dX^2 + dY^2 - dU^2 - dV^2 \ . \end{equation*}

The Killing vectors are denoted J_{XY} = X\partial_Y - Y\partial_X, J_{XU} = X\partial_U + U\partial_X, and so on. The topology is now {\bf R}^2 \times {\bf S}^1, and one may wish to go to the covering space in order to remove the closed timelike curves. Our arguments will mostly not depend on whether this final step is taken.

For the 2+1 dimensional case the definition can be reformulated in an interesting way. Anti-de Sitter space can be regarded as the group manifold of SL(2,{\bf R}), that is as the set of matrices

(4)   \begin{equation*} g = \left[ \begin{array}{cc} V+X & Y+U \\ Y-U & V-X \end{array} \right] \ , \hspace{10mm} \mbox{det}g = U^2 + V^2 - X^2 - Y^2 = 1 \ .  \end{equation*}

The group manifold is equipped with its natural metric, which is invariant under transformations g \rightarrow g_1gg_2^{-1}, g_1, g_2 \in SL(2, {\bf R}). The Killing vectors can now be organized into two orthonormal and mutually commuting sets,

(5)   \begin{eqnarray*} & J_1 = - J_{XU} - J_{YV} \hspace{15mm} & \tilde{J}_1 =  - J_{XU} + J_{YV} \\ & J_2 = - J_{XV} + J_{YU} \hspace{15mm} & \tilde{J}_2 = - J_{XV} - J_{YU} \\ & J_0 = - J_{XY} - J_{UV} \hspace{15mm} & \tilde{J}_0 = J_{XY} - J_{UV} \ . \end{eqnarray*}

They obey

(6)   \begin{equation*} ||J_1||^2 = ||J_2||^2 = - ||J_0||^2 = 1 \ , \hspace{3mm} ||\tilde{J}_1||^2 = ||\tilde{J}_2||^2 = - ||\tilde{J}_0||^2 = 1 \ . \end{equation*}

The story here is this: 2\times 2 real matrices form a four-dimensional real vector space. We can use \alpha,\beta,\gamma,\delta or X,Y,U,V as coordinates y^1,y^2,y^3,y^4 there. The condition of being of determinant one defines a three-dimensional hypersurface in \mathbf{R}^4. We can endow \mathbf{R}^4 with a scalar product determined by the matrix G defined by:

(7)   \begin{equation*}G=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}.\end{equation*}

The scalar product is then defined as

(8)   \begin{equation*}(y,y')=y^TGy'=G_{ij}y^{i}y'^{j}=y^1y'^1+y^2y'^2-y^3y'^3-y^4y'^4.\end{equation*}

This scalar product is invariant with respect to the group O(2,2) of 4\times 4 real matrices A satisfying:

(9)   \begin{equation*}A^TGA=G.\end{equation*}

That is, if A\in O(2,2) then (Ay,Ay')=(y,y') for all y,y' in \mathbf{R}^4.
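For instance (an illustration of mine, not from the original text), a hyperbolic rotation mixing y^1 and y^3 satisfies Eq. (9) and therefore preserves the scalar product, and hence every level set (y,y)=\mbox{const}:

```python
import numpy as np

G = np.diag([1.0, 1.0, -1.0, -1.0])

def boost_13(t):
    # exp(t Xi_(13)): hyperbolic rotation in the (y^1, y^3) plane
    a = np.eye(4)
    a[0, 0] = a[2, 2] = np.cosh(t)
    a[0, 2] = a[2, 0] = np.sinh(t)
    return a

a = boost_13(0.8)
assert np.allclose(a.T @ G @ a, G)      # a satisfies Eq. (9): a is in O(2,2)

# a point with (y, y) = 0.3^2 + 0.4^2 - 1.0^2 - 0.5^2 = -1
y = np.array([0.3, 0.4, 1.0, 0.5])
assert abs(y @ G @ y + 1.0) < 1e-12
ay = a @ y
assert abs(ay @ G @ ay + 1.0) < 1e-12   # a y stays on the same quadric
```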

What I will be writing now is “elementary” in the sense that “everybody in the business” knows it, and, if asked, will often not be able to tell where and when he or she learned it. But this is a blog, and the subject is so pretty that it would be a pity if people “not in the business” missed it.

The equation (2) can then be written as (y,y)=-1. It determines a “generalized hyperboloid” in \mathbf{R}^4 that is invariant with respect to the action of O(2,2). Thus the situation is analogous to the one we have seen in The disk and the hyperbolic model. There we had the Poincaré disk realized as a two-dimensional hyperboloid in a three-dimensional space with signature (2,1); here we have SL(2,R) realized as a generalized hyperboloid in a four-dimensional space with signature (2,2). Before it was the group O(2,1) that was acting on the hyperboloid; now it is the group O(2,2). Let us look at the vector fields of the generators of this group. By differentiating Eq. (9) at the group identity we find that each generator \Xi must satisfy the equation:

(10)   \begin{equation*}\Xi^TG+G\Xi=0.\end{equation*}

This equation can also be written as

(11)   \begin{equation*}(G\Xi)^T+G\Xi=0.\end{equation*}

Thus G\Xi must be antisymmetric. In n dimensions the space of antisymmetric matrices is n(n-1)/2-dimensional. For us n=4, therefore the Lie algebra so(2,2) is 6-dimensional, like the Lie algebra so(4) – the two are simply related by the matrix multiplication \Xi\mapsto G\Xi. We need a basis in so(2,2), so let us start with a basis in so(4). Let M_{(\mu\nu)} denote the elementary antisymmetric matrix that has 1 in row \mu, column \nu, and -1 in row \nu, column \mu for \mu\neq \nu, and zeros everywhere else. In a formula,

    \[ (M_{(\mu\nu)})_{\alpha\beta}=\delta_{\alpha\mu}\delta_{\beta\nu}-\delta_{\alpha\nu}\delta_{\beta\mu},\]

where \delta_{\mu\nu} is the Kronecker delta symbol: \delta_{\mu\nu}=1 for \mu=\nu, and =0 for \mu\neq\nu.

As we have mentioned above, the matrices \Xi_{(\mu\nu)}=G^{-1}M_{(\mu\nu)} then form a basis of the Lie algebra so(2,2). We can list them as follows:

(12)   \begin{equation*}\Xi_{(12)}=\left[\begin{smallmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&0\\0&0&0&0\end{smallmatrix}\right], \Xi_{(13)}=\left[\begin{smallmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{smallmatrix}\right], \Xi_{(14)}=\left[\begin{smallmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{smallmatrix}\right].\end{equation*}

(13)   \begin{equation*}\Xi_{(23)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&0\end{smallmatrix}\right], \Xi_{(24)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&1&0&0\end{smallmatrix}\right], \Xi_{(34)}=\left[\begin{smallmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\end{smallmatrix}\right].\end{equation*}
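These matrices can be generated and checked programmatically. The sketch below (mine; zero-based indices) builds all six generators and verifies that they satisfy Eqs. (10)-(11):

```python
import numpy as np

G = np.diag([1.0, 1.0, -1.0, -1.0])

def M(mu, nu):
    # elementary antisymmetric matrix: +1 at (mu, nu), -1 at (nu, mu)
    m = np.zeros((4, 4))
    m[mu, nu], m[nu, mu] = 1.0, -1.0
    return m

# the six basis generators Xi_(mu nu) = G^{-1} M_(mu nu) of Eqs. (12)-(13)
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
gens = [np.linalg.inv(G) @ M(mu, nu) for mu, nu in pairs]

for Xi in gens:
    assert np.allclose(Xi.T @ G + G @ Xi, 0)    # Eq. (10): Xi is in so(2,2)
    assert np.allclose((G @ Xi).T, -(G @ Xi))   # Eq. (11): G Xi is antisymmetric

# spot-check against the printed matrices, e.g. Xi_(12) and Xi_(13)
assert np.allclose(gens[0], [[0,1,0,0],[-1,0,0,0],[0,0,0,0],[0,0,0,0]])
assert np.allclose(gens[1], [[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]])
```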

On the path to AdS

In the next post we will relate these generators to J_i,\tilde{J}_i from the anti de Sitter paper by Bengtsson and Sandin, and to our Killing vector fields \xi_{iL},\xi_{iR} from the last note.