Chiromancy in the rotation group

When we travel by car, we decide which path to take. Choosing the right path may be crucial. Sometimes we choose a path that is straight but nevertheless scary, like this road in Nebraska:

Sometimes the path is just “unusual”:

And sometimes we will think twice before choosing it:

We are interested in a particular kind of traveling: we want to travel in the rotation group. Think of an airplane. When it flies, it of course moves from one location to another. But that is not what interests us now. During the flight the plane turns and tilts, and all its turns and tilts can be (and often are) recorded. There are essentially three parameters that need to be recorded:

Usually they are referred to as roll, pitch and yaw. When you throw a stone in the air, and if you endow it with three axes, its roll, pitch, and yaw will also change with time. As there are three parameters, they can be represented by a point moving in a three-dimensional space. Instead of roll, pitch and yaw we could choose Euler angles. But, in fact, we will choose still another way of representing a history of tilts and turns of a rigid object as a path. We will use stereographic projection from the 3-dimensional sphere to the 3-dimensional Euclidean space.

You may ask: why do it? What is the point? What can we learn from it? And my answer is: for fun.
One can also ask: what is the point of inspecting your hand? And yet there is a whole art of chiromancy, or palmistry. From Wikipedia:

Chiromancy consists of the practice of evaluating a person’s character or future life by “reading” the palm of that person’s hand. Various “lines” (“heart line”, “life line”, etc.) and “mounts” (or bumps) (chirognomy) purportedly suggest interpretations by their relative sizes, qualities, and intersections. In some traditions, readers also examine characteristics of the fingers, fingernails, fingerprints, and palmar skin patterns (dermatoglyphics), skin texture and color, shape of the palm, and flexibility of the hand.

We are going to learn about the lines in the rotation group. Some will be short, some will be long. We will learn how to read from them about the body that recorded these lines during its flight.

From the internet we can learn:

Which Hand to Read
Before reading your palm, you should choose the right hand to read. There are different schools of thought on this matter. Some people think the right for female and left for male. As a matter of fact, both of your hands play great importance in hand reading. But one is dominant and the other is passive. The left hand usually represents what you were born with physically and materially and the right hand represents what you become after grown up. So, the right hand is dominant in palm reading and the left for supplement.

The same is true with rotation groups. Which one to choose? We have rotations represented naturally as 3\times 3 orthogonal matrices, but we can also represent rotations by quaternions or by unitary matrices from the group \mathrm{SU}(2). Both are of great importance. We will choose unit quaternions or, equivalently, \mathrm{SU}(2); they seem to be primary!

Let us recall from Putting a spin on mistakes that every 2\times 2 unitary matrix U of determinant one is necessarily of the form

(1)   \begin{equation*}U=\begin{bmatrix}W+iZ&iX-Y\\iX+Y&W-iZ\end{bmatrix},\end{equation*}

and it determines the rotation matrix R(U):

(2)   \begin{equation*} R(U)=\begin{bmatrix} W^2+X^2-Y^2-Z^2&2(XY-WZ)&2(WY+XZ)\\ 2(WZ+XY)&W^2-X^2+Y^2-Z^2&2(YZ-WX)\\ 2(XZ-WY)&2(WX+YZ)&W^2-X^2-Y^2+Z^2 \end{bmatrix}, \end{equation*}

where W,X,Y,Z are real numbers representing a point on the three-dimensional unit sphere S^3 in the 4-dimensional Euclidean space \mathbf{R}^4, that is

(3)   \begin{equation*}W^2+X^2+Y^2+Z^2=1.\end{equation*}

The three-sphere S^3 is “homogeneous”, the same as the well-known two-sphere, like a table tennis ball.

And yet I want to paint one particular point on our three-sphere, namely the point represented by numbers W=1,X=Y=Z=0. It follows from Eq. (2) that the rotation matrix corresponding to this particular point is the identity matrix:

    \[I=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}.\]

I am not going to hide the fact that the opposite point, W=-1,X=Y=Z=0, on our three-sphere also produces the identity rotation matrix, that is, in ordinary language: no rotation at all.
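
These statements are easy to verify numerically. Below is a minimal Python/NumPy sketch (my own illustration, not the REDUCE code mentioned in Putting a spin on mistakes; the function name rotation_from_quaternion is invented for this example) that builds R(U) of Eq. (2) from a point (W,X,Y,Z) of S^3, confirms that both poles give the identity matrix, and checks orthogonality and unit determinant for a random point:

```python
import numpy as np

def rotation_from_quaternion(W, X, Y, Z):
    """Rotation matrix R(U) of Eq. (2), built from a point (W, X, Y, Z) of S^3."""
    return np.array([
        [W*W + X*X - Y*Y - Z*Z, 2*(X*Y - W*Z),         2*(W*Y + X*Z)],
        [2*(W*Z + X*Y),         W*W - X*X + Y*Y - Z*Z, 2*(Y*Z - W*X)],
        [2*(X*Z - W*Y),         2*(W*X + Y*Z),         W*W - X*X - Y*Y + Z*Z],
    ])

# Both poles give the identity rotation, i.e. no rotation at all.
print(rotation_from_quaternion( 1, 0, 0, 0))    # identity matrix
print(rotation_from_quaternion(-1, 0, 0, 0))    # identity matrix again

# A random point of S^3 gives an orthogonal matrix of determinant one.
q = np.random.randn(4)
q /= np.linalg.norm(q)                          # enforce W^2+X^2+Y^2+Z^2 = 1
R = rotation_from_quaternion(*q)
print(np.allclose(R.T @ R, np.eye(3)))          # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))        # True: det R = 1
```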

We are now going to go back to the stereographic projection that was already introduced in Dzhanibekov effect – Part 2.

The idea is as in the picture below:

except that our sphere is 3-dimensional, and the “plane” onto which we are projecting is also 3-dimensional. Our North Pole is now the point W=1,X=Y=Z=0, our South Pole the point W=-1,X=Y=Z=0, and our “equator”, that is, the set of all points on S^3 for which W=0, is now a 2-sphere rather than a circle as in the picture above. The projection formulas read (see Dzhanibekov effect – Part 2):

(4)   \begin{eqnarray*} x&=&\frac{X}{1-W},\\ y&=&\frac{Y}{1-W},\\ z&=&\frac{Z}{1-W}. \end{eqnarray*}

The inverse transformation is given by

(5)    \begin{eqnarray*} W&=&\frac{r^2-1}{r^2+1},\\ X&=&\frac{2x}{r^2+1},\\ Y&=&\frac{2y}{r^2+1},\\ Z&=&\frac{2z}{r^2+1}, \end{eqnarray*}

where r^2=x^2+y^2+z^2.

The whole southern hemisphere (points on S^3 with W<0) is projected into the interior of the unit ball in \mathbf{R}^3! The “equator”, where the three-sphere intersects the three-plane W=0, is projected onto the unit sphere in \mathbf{R}^3. The North Pole is the only point on S^3 that has no image – or rather, its image is at “infinity”. We will not need it. The South Pole, which, as we know, represents the trivial rotation, is mapped to the origin of \mathbf{R}^3, the point with x=y=z=0.
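
As a sanity check of Eqs. (4) and (5) and of the statements above, here is a minimal Python/NumPy sketch (my own; the helper names project and unproject are invented for this illustration) that performs the projection and its inverse, verifies the round trip, and confirms that the southern hemisphere lands inside the unit ball and the South Pole at the origin:

```python
import numpy as np

def project(W, X, Y, Z):
    """Stereographic projection from the North Pole, Eq. (4)."""
    return np.array([X, Y, Z]) / (1.0 - W)

def unproject(x, y, z):
    """Inverse of the stereographic projection, Eq. (5)."""
    r2 = x*x + y*y + z*z
    return np.array([r2 - 1.0, 2*x, 2*y, 2*z]) / (r2 + 1.0)

# A random point of the southern hemisphere (W < 0) of S^3.
q = np.random.randn(4)
q /= np.linalg.norm(q)
if q[0] > 0:
    q = -q                              # flip to W < 0; still a point of S^3

p = project(*q)
print(np.allclose(unproject(*p), q))    # True: round trip recovers (W, X, Y, Z)
print(np.linalg.norm(p) < 1.0)          # True: southern hemisphere -> inside the unit ball

# The South Pole (the trivial rotation) goes to the origin of R^3.
print(project(-1.0, 0.0, 0.0, 0.0))     # [0. 0. 0.]
```
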
We will continue this course of rotational palmistry in the next posts.

Putting a spin on mistakes

I am reading Leopold Infeld, “QUEST: The Evolution of a Scientist”, Readers Union Limited, London 1942. There is a little piece there about errors that drew my attention. The story takes place in Princeton:

The next day I met Robertson in Fine Hall and told him:
“I am convinced now that gravitational waves do not exist. I believe I am able to show it in a very brief way.” Robertson was still sceptical:
“I don’t believe you,” and he suggested a more detailed discussion. He took the two pages on which I had written my proof and read it through.
“The idea is O.K. There must be some trivial mistake in your calculations.”
He began quickly and efficiently to check all the steps of my argument, even the most simple ones, comparing the results on the blackboard with those in my notes. The beginning checked beautifully. I marvelled at the quickness and sureness with which Robertson performed all the computations. Then, near the end, there was a small discrepancy. He got plus where I got minus. We checked and rechecked the point; Robertson was right. At one place I had made a most trivial mistake, in spite of having repeated the calculations three times! Such a mistake must have as simple an explanation as a Freudian slip in writing or language. Subconsciously I did not believe in the result and my first doubts were still there. But I wanted to prove it in a simpler way than had Einstein. Thus I had to cheat myself. Such mistakes happen often, but they are usually caught before papers are printed. The author comes back many times to his work before he sees it in print and has, therefore, plenty of opportunity to gain a more detached attitude and to apply the criteria of logic which were repressed during the emotional strain of creation.
Thus my whole result went to pieces. Although it was a very small matter, I felt very downcast at the moment, as does every scientist in similar circumstances even though he has accustomed himself to disappointment. He feels the collapse of hope, the death of the creation of his brain. It is like the succession of birth and death.
Robertson tried to console me:
“It happens to everyone. The most trivial mistakes are always most difficult to detect.”

Interesting! But who is this Robertson?

From Wikipedia:

Howard Percy “Bob” Robertson (January 27, 1903 – August 26, 1961) was an American mathematician and physicist known for contributions related to physical cosmology and the uncertainty principle. He was Professor of Mathematical Physics at the California Institute of Technology and Princeton University.

After the war Robertson was director of the Weapons System Evaluation Group in the Office of the Secretary of Defense from 1950 to 1952, chairman of the Robertson Panel on UFO’s in 1953 and Scientific Advisor to the NATO Supreme Allied Commander Europe (SACEUR) in 1954 and 1955. He was Chairman of the Defense Science Board from 1956 to 1961, and a member of the President’s Science Advisory Committee (PSAC) from 1957 to 1961.

… The Robertson Panel recommended that a public education campaign should be undertaken in order to reduce public interest in the subject, minimising the risk of swamping Air Defence systems with reports at critical times, and that civilian UFO groups should be monitored.
….
A number of criticisms have been leveled at the Robertson panel. In particular that the panel’s study of the phenomena was relatively perfunctory and its conclusions largely predetermined by the earlier CIA review of the UFO situation.

Robertson Panel was a device used by the CIA to establish a cover program (Blue Book) that would draw attention away from a covert program designed to meet the UFO challenge.


Richard M. Dolan, Jacques F. Vallee,
UFOs and the National Security State: Chronology of a Coverup, 1941-1973, Hampton Roads 2002

So, it seems that H. P. Robertson knew not only how to avoid mistakes, but he also knew how to mislead others.

As for me, I am very good at the mistake-making business. My ideas are most of the time correct, but when it comes to their implementation, I make a lot of mistakes. At every step. That is probably because I am impatient. So, when I want to get some result, I go as quickly as I can to the end – just to see that the method is working, even if the end result is wrong. And the method is working if it produces results, because there are many methods that lead to no result at all. And when I have a method that is producing results, then I start working on removing mistakes. Usually I check the end result using several different methods. It is rather improbable that the same mistake will show up when you use different methods.
But then, when I have everything working, I need to write it down, in all details. And that is a pain. That is where I make more mistakes.
Why is it so? It must be something more than the lack of patience. It is not a Freudian slip, I am sure, though sometimes it may be the case….

Anyway it is time now to go back to calculations, where making mistakes is so easy.

Let me recall from the previous post Pauli, rotations and quaternions.

We consider the group \mathrm{SU}(2) of complex 2\times 2 unitary matrices of determinant 1. We have the Hermitian traceless matrices s_1,s_2,s_3 given by Eq. (1) in Pauli, rotations and quaternions. To each vector \vec{v}=(v_1,v_2,v_3) we associate the Hermitian matrix \vec{v}\cdot\vec{s}, as in Eq. (10) of the previous post. We have the orthogonal transformation R defined by

    \[(R\vec{v})\cdot\vec{s}=U(\vec{v}\cdot\vec{s})\,U^*.\]

We know that R is orthogonal. We would like to know that \det R=1, and that is where we left off in the last note.
One way of proving that R has determinant 1 is to express R in terms of the elements of the matrix U. We know (see the previous post) that U has the form

(1)   \begin{equation*}\begin{bmatrix}a&-\bar{c}\\c&\bar{a}\end{bmatrix},\end{equation*}

where |a|^2+|c|^2=1. It is now a straightforward calculation to express R in terms of a and c. But the calculation is long, and it is easy to make a mistake. That is why I used REDUCE, an excellent free computer algebra system (and then double-checked with Mathematica). The REDUCE file doing the calculation is here. And here is the end result:

(2)   \begin{align*} R_{11}&=\frac{a^2+\bar{a}^2-c^2-\bar{c}^2}{2},\\ R_{12}&=i\,\frac{a^2-\bar{a}^2-c^2+\bar{c}^2}{2},\\ R_{13}&=a\bar{c}+\bar{a}c,\\ R_{21}&=i\,\frac{\bar{a}^2-a^2+\bar{c}^2-c^2}{2},\\ R_{22}&= \frac{a^2+\bar{a}^2+c^2+\bar{c}^2}{2},\\ R_{23}&=i\,(\bar{a}c-a\bar{c}),\\ R_{31}&=-ac-\bar{a}\bar{c},\\ R_{32}&=i\,(\bar{a}\bar{c}-ac),\\ R_{33}&=a\bar{a}-c\bar{c}. \end{align*}

Or in terms of quaternionic components W,X,Y,Z as in Eq. (9) of the previous post:

(3)   \begin{equation*} \begin{bmatrix} W^2+X^2-Y^2-Z^2&2(XY-WZ)&2(WY+XZ)\\ 2(WZ+XY)&W^2-X^2+Y^2-Z^2&2(YZ-WX)\\ 2(XZ-WY)&2(WX+YZ)&W^2-X^2-Y^2+Z^2 \end{bmatrix}. \end{equation*}

The REDUCE file producing the above matrix is here.
Then I used REDUCE again to verify that R defined as above is indeed orthogonal and has determinant one. The REDUCE file is here.
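
For readers without REDUCE at hand, the same cross-check can be sketched numerically. The following Python/NumPy snippet (my own sketch, not the linked REDUCE code) builds a random U\in\mathrm{SU}(2) from a and c, computes R column by column from the defining relation (R\vec{v})\cdot\vec{s}=U(\vec{v}\cdot\vec{s})\,U^*, and compares it with the explicit entries (2); it also checks that \det R=1:

```python
import numpy as np

# The matrices s_1, s_2, s_3 of "Pauli, rotations and quaternions", Eq. (1).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, 1j], [-1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec_to_matrix(v):
    """v . s  (Hermitian, traceless)."""
    return v[0]*s[0] + v[1]*s[1] + v[2]*s[2]

def matrix_to_vec(M):
    """Read the vector back from a Hermitian traceless matrix."""
    return np.array([M[0, 1].real, M[0, 1].imag, M[0, 0].real])

# Random U in SU(2): first column (a, c) with |a|^2 + |c|^2 = 1.
a, c = np.random.randn(2) + 1j*np.random.randn(2)
norm = np.sqrt(abs(a)**2 + abs(c)**2)
a, c = a/norm, c/norm
U = np.array([[a, -np.conj(c)], [c, np.conj(a)]])

# R from the defining relation (R v).s = U (v.s) U*, column by column.
R_jaw = np.column_stack([matrix_to_vec(U @ vec_to_matrix(e) @ U.conj().T)
                         for e in np.eye(3)])

# R from the explicit formulas (2).
ab, cb = np.conj(a), np.conj(c)
R_formula = np.array([
    [(a*a + ab*ab - c*c - cb*cb)/2, 1j*(a*a - ab*ab - c*c + cb*cb)/2, a*cb + ab*c],
    [1j*(ab*ab - a*a + cb*cb - c*c)/2, (a*a + ab*ab + c*c + cb*cb)/2, 1j*(ab*c - a*cb)],
    [-a*c - ab*cb, 1j*(ab*cb - a*c), a*ab - c*cb],
]).real

print(np.allclose(R_jaw, R_formula))              # True: the two agree
print(np.isclose(np.linalg.det(R_formula), 1.0))  # True: det R = 1
```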

It takes a while to write the code and to debug it, but then producing the result is almost instant.

There is one more property that we need to verify. We have the map U\mapsto R=R(U), which maps \mathrm{SU}(2) matrices into orthogonal matrices of determinant 1. But we would like to be sure that every rotation R can be obtained in this way. How to do it? One way would be to calculate a and c from the matrix R using the formulas above. But that would not be easy: we have four real unknowns and nine quadratic equations. There is a smarter and more useful method. Here is how we do it: suppose we know that every rotation R, that is, every orthogonal matrix of determinant 1, is a rotation about some axis by some angle (and it is true). That is, we can represent R as

    \[ R=\exp(\theta W(\vec{k})),\]

where \vec{k} is a unit vector. That was discussed in the note Spin – we know that we do not know.
From the Rodrigues rotation formula  discussed in this note we can calculate R explicitly.
On the other hand we can calculate explicitly U=\exp(i\frac{\theta}{2}\vec{k}\cdot \vec{s}) and then R(U). It is then a matter of straightforward calculation to verify that we get back our R:

(4)   \begin{equation*}\exp(\theta W(\vec{k}))=R\left(\exp(i\frac{\theta}{2}\vec{k}\cdot \vec{s})\right).\end{equation*}

The calculation is simple but long. REDUCE code that does it is here.
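
Again, a quick numerical spot-check can stand in for the REDUCE code. The Python/NumPy sketch below is my own; it assumes, as in the note on the Rodrigues formula, that W(\vec{k}) is the antisymmetric matrix acting as W(\vec{k})\vec{v}=\vec{k}\times\vec{v}. It compares \exp(\theta W(\vec{k})) with R(\exp(i\frac{\theta}{2}\vec{k}\cdot\vec{s})) for a random axis and angle:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, 1j], [-1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def jaw_rotation(U):
    """R(U) defined by (R v).s = U (v.s) U*, extracted column by column."""
    def to_matrix(v):
        return v[0]*s[0] + v[1]*s[1] + v[2]*s[2]
    def to_vec(M):
        return np.array([M[0, 1].real, M[0, 1].imag, M[0, 0].real])
    return np.column_stack([to_vec(U @ to_matrix(e) @ U.conj().T)
                            for e in np.eye(3)])

# Random unit axis k and angle theta.
k = np.random.randn(3); k /= np.linalg.norm(k)
theta = np.random.uniform(0, 2*np.pi)

# Left-hand side of Eq. (4): exp(theta W(k)) via the Rodrigues formula,
# with W(k) the antisymmetric matrix such that W(k) v = k x v.
K = np.array([[0, -k[2], k[1]],
              [k[2], 0, -k[0]],
              [-k[1], k[0], 0]])
lhs = np.eye(3) + np.sin(theta)*K + (1 - np.cos(theta))*(K @ K)

# Right-hand side: R(exp(i (theta/2) k.s)); since (k.s)^2 = I for a unit k,
# the exponential is cos(theta/2) I + i sin(theta/2) (k.s).
ks = k[0]*s[0] + k[1]*s[1] + k[2]*s[2]
U = np.cos(theta/2)*np.eye(2) + 1j*np.sin(theta/2)*ks
rhs = jaw_rotation(U)

print(np.allclose(lhs, rhs))   # True: Eq. (4) holds for this axis and angle
```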

Pauli, rotations and quaternions

In Nobody understands quantum mechanics, but spin is fun we met three Pauli matrices \sigma_1,\sigma_2,\sigma_3. They are kinda OK, but they do not quite fit our purpose. Therefore I will replace them with another set of matrices, and I will call these matrices s_1,s_2,s_3. They are defined the same way as the Pauli matrices, except that s_2 has a different sign:

(1)   \begin{equation*}s_1=\begin{bmatrix}0&1\\1&0\end{bmatrix},\quad s_2=\begin{bmatrix}0&i\\-i&0\end{bmatrix},\quad s_3=\begin{bmatrix}1&0\\0&-1\end{bmatrix}.\end{equation*}

We have

(2)   \begin{equation*} (s_i)^*=s_i,\quad (i=1,2,3),\end{equation*}

(3)   \begin{equation*}(s_1)^2=(s_2)^2=(s_3)^2=I,\end{equation*}

(4)   \begin{equation*}s_1s_2=-is_3,\quad s_2s_3=-is_1,\quad s_3s_1=-is_2.\end{equation*}

The Pauli matrices \sigma_i would have plus signs on the right in Eq. (4). We want minus signs there. Why? Soon we will see why. But perhaps I should mention now that when Pauli is around, many things go not quite right. That is the famous “Pauli effect”. From the Wikipedia article Pauli Effect:

The Pauli effect is a term referring to the apparently mysterious, anecdotal failure of technical equipment in the presence of Austrian theoretical physicist Wolfgang Pauli. The term was coined using his name after numerous instances in which demonstrations involving equipment suffered technical problems only when he was present.

One example:

In 1934, Pauli saw a failure of his car during a honeymoon tour with his second wife as proof of a real Pauli effect since it occurred without an obvious external cause.

You can find more examples in the Wikipedia article. The car breakdown during the honeymoon with Pauli’s second wife is somehow related to the change of the sign of the second matrix above. And now it is easy to relate our second set of matrices to quaternions. It was Sir William Rowan Hamilton who introduced quaternions in 1843. There were three imaginary units \mathbf{i},\mathbf{j},\mathbf{k} satisfying:

(5)   \begin{equation*}\mathbf{i}^2=\mathbf{j}^2=\mathbf{k}^2=-1,\end{equation*}

(6)   \begin{equation*}\mathbf{i}\mathbf{j}=\mathbf{k},\quad \mathbf{j}\mathbf{k}=\mathbf{i},\quad \mathbf{k}\mathbf{i}=\mathbf{j}.\end{equation*}


Hamilton, when he invented quaternions, thought of them as abstract objects obeying simple algebraic rules. But now we can realize them as complex matrices. To this end it is enough to define

(7)   \begin{equation*} \mathbf{i}=is_1=\begin{bmatrix}0&i\\i&0\end{bmatrix}, \quad \mathbf{j}=is_2=\begin{bmatrix}0&-1\\1&0\end{bmatrix},\quad \mathbf{k}=is_3=\begin{bmatrix}i&0\\0&-i\end{bmatrix},\end{equation*}

and notice that the algebraic relations defining the multiplication table of quaternions are automatically satisfied! If we had used the Pauli matrices \sigma, we would have had to use -i in the above equations – which is not so nice. You would have asked: why minus rather than plus?
Some care is needed, however. One should distinguish the bold \mathbf{i} – the first quaternionic unit – from the imaginary complex number i. One should also note that the squares of the three imaginary quaternionic units, in their matrix realization, are -I, that is, “minus the identity matrix”, instead of just the number -1, as in Hamilton’s definition.
In the matrix realization we also have:

(8)   \begin{equation*}\mathbf{i}^*=-\mathbf{i},\quad \mathbf{j}^*=-\mathbf{j},\quad \mathbf{k}^*=-\mathbf{k},\end{equation*}

where the star {}^* denotes Hermitian conjugation.
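
All these relations are one-liners to confirm in the matrix realization. Here is a small Python/NumPy check of Eqs. (5), (6) and (8) (my own illustration):

```python
import numpy as np

# s_1, s_2, s_3 of Eq. (1) (note the flipped sign of s_2 relative to Pauli's sigma_2).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, 1j], [-1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The quaternionic units in their matrix realization, Eq. (7).
qi, qj, qk = 1j*s1, 1j*s2, 1j*s3
I = np.eye(2)

# Eq. (5): the squares are -I (minus the identity matrix).
print(all(np.allclose(u @ u, -I) for u in (qi, qj, qk)))          # True

# Eq. (6): ij = k, jk = i, ki = j.
print(np.allclose(qi @ qj, qk),
      np.allclose(qj @ qk, qi),
      np.allclose(qk @ qi, qj))                                   # True True True

# Eq. (8): Hermitian conjugation flips the sign of each unit.
print(all(np.allclose(u.conj().T, -u) for u in (qi, qj, qk)))     # True
```
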
A general quaternion q is a sum:

    \[q= W+X\mathbf{i}+Y\mathbf{j}+Z\mathbf{k}.\]

In matrix realization it is represented by the matrix

(9)   \begin{equation*}Q=W I+X\mathbf{i}+Y\mathbf{j}+Z\mathbf{k}=\begin{bmatrix}W+iZ&iX-Y\\iX+Y&W-iZ\end{bmatrix}.\end{equation*}

The conjugate quaternion, q^*= W-X\mathbf{i}-Y\mathbf{j}-Z\mathbf{k}, is represented by the Hermitian conjugate matrix. Notice that qq^*=q^* q=W^2+X^2+Y^2+Z^2. Unit quaternions are quaternions with the property that qq^*=1. They are represented by matrices Q such that QQ^*=I, that is, by unitary matrices. Moreover, we can check that then

    \[\det Q=\det \begin{bmatrix}W+iZ&iX-Y\\iX+Y&W-iZ\end{bmatrix}=W^2+X^2+Y^2+Z^2=1.\]

Therefore the matrices representing unit quaternions are unitary and of determinant one. Such matrices form a group, the special unitary group \mathrm{SU}(2). Yet, to dot the i’s and cross the t’s, we still need to show that every 2\times 2 unitary matrix of determinant one is of the form (9). Let therefore U be such a matrix:

    \[ U=\begin{bmatrix}a&b\\c&d\end{bmatrix}.\]

Since \det U=ad-bc=1, we can easily check that

    \[\begin{bmatrix}a&b\\c&d\end{bmatrix} \begin{bmatrix}d&-b\\-c&a\end{bmatrix}=I.\]

Therefore

    \[ U^{-1}=\begin{bmatrix}d&-b\\-c&a\end{bmatrix}.\]

On the other hand U is unitary, that means U^*=U^{-1}. But

    \[ U^*=\begin{bmatrix}\bar{a}&\bar{c}\\\bar{b}&\bar{d}\end{bmatrix}.\]

Therefore we must have

    \[\begin{bmatrix}d&-b\\-c&a\end{bmatrix}=\begin{bmatrix}\bar{a}&\bar{c}\\\bar{b}&\bar{d}\end{bmatrix}.\]

That is, d=\bar{a}, b=-\bar{c}. Writing a=W+iZ, c=Y+iX, we get the form (9).
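
This last step can also be illustrated numerically: take a random U\in\mathrm{SU}(2), read off W,Z from a and X,Y from c, and rebuild U as W I+X\mathbf{i}+Y\mathbf{j}+Z\mathbf{k}. A minimal Python/NumPy sketch of this check (my own; the variable names are invented here):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, 1j], [-1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
qi, qj, qk = 1j*s1, 1j*s2, 1j*s3

# A random U in SU(2): columns (a, c) and (-conj(c), conj(a)) with |a|^2+|c|^2 = 1.
a, c = np.random.randn(2) + 1j*np.random.randn(2)
norm = np.sqrt(abs(a)**2 + abs(c)**2)
a, c = a/norm, c/norm
U = np.array([[a, -np.conj(c)], [c, np.conj(a)]])

# Read off the quaternion components: a = W + iZ, c = Y + iX.
W, Z = a.real, a.imag
Y, X = c.real, c.imag

# Rebuild U as W*I + X*i + Y*j + Z*k, Eq. (9), and compare.
Q = W*np.eye(2) + X*qi + Y*qj + Z*qk
print(np.allclose(Q, U))                   # True: U is of the form (9)
print(np.isclose(np.linalg.det(U), 1.0))   # True: W^2+X^2+Y^2+Z^2 = 1
```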

We are interested in rotations in our 3D space. Quantum mechanics suggests using matrices from the group \mathrm{SU}(2). We now know that they are unit quaternions in disguise. But, on the other hand, perhaps quaternions are matrices from \mathrm{SU}(2) in disguise? When Hamilton was inventing quaternions he did not know that he was inventing something that would be useful for rotations. He simply wanted to invent “the algebra of space”. Maxwell first tried to apply quaternions in electrodynamics (for the div and curl operators), but that was pretty soon abandoned. Mathematicians at that time were complaining that (see Robert Perceval Graves, Addendum to the Life of Sir William Rowan Hamilton, LL.D, D.C.L.: On Sir W. R. Hamilton’s Irish Descent; On the Calculus of Quaternions, Dublin: Hodges, Figgis, and Co., 1891):

As a whole the method is pronounced by most mathematicians to be neither easy nor attractive, the interpretation being hazy or metaphysical and seldom clear and precise.

Nowadays quaternions are best understood in the framework of Clifford algebras. Unit quaternions are just one example of what are called “spin groups”. We do not need such a general framework here. We will be quite happy with just quaternions and the group \mathrm{SU}(2). But we still have to relate the matrices U from \mathrm{SU}(2) to rotations, and to learn how to use them. Perhaps I will mention already here, at this point, that the idea is to consider rotations as points of a certain space. We want to learn about the geometry of this space, and we want to investigate different curves in this space. Some of these curves describe rotations and flips of an asymmetric spinning top. Can they be interpreted as “free fall” in this space under some force of gravitation, when gravitation is related to inertia? Perhaps this way we will be able to get a glimpse into the otherwise mysterious connection of gravity to quantum mechanics? These questions may sound somewhat hazy and metaphysical. Therefore let us turn to good old algebra.

With every vector \vec{v} in our 3D space we can associate the Hermitian matrix \vec{v}\cdot\vec{s} of trace zero:

(10)   \begin{equation*}\vec{v}\cdot\vec{s}=v_1s_1+v_2s_2+v_3s_3=\begin{bmatrix}v_3&v_1+iv_2\\v_1-iv_2&-v_3\end{bmatrix}.\end{equation*}

In fact, as is very easy to see, every Hermitian matrix of trace zero is of this form. Notice that

(11)   \begin{equation*}-\det \vec{v}\cdot\vec{s}=(v_1)^2+(v_2)^2+(v_3)^2=||v||^2.\end{equation*}

Let U be a unitary matrix. We act with unitary matrices on vectors using the “jaw operation” – we take the matrix representation \vec{v}\cdot\vec{s} of the vector in the jaws:

    \[ \vec{v}\cdot\vec{s}\mapsto U\vec{v}\cdot\vec{s}\, U^*.\]

Now, since \vec{v}\cdot\vec{s} is Hermitian and U is unitary, U\vec{v}\cdot\vec{s}\, U^* is also Hermitian. Since \vec{v}\cdot\vec{s} is of trace zero and U is unitary, U\vec{v}\cdot\vec{s}\, U^* is also of trace zero. Therefore there exists a vector \tilde{\vec{v}} =(\tilde{v}_1,\tilde{v}_2,\tilde{v}_3) such that:

    \[\tilde{ \vec{v}}\cdot\vec{s}=U\vec{v}\cdot\vec{s}\,U^*.\]

If U\in\mathrm{SU}(2), then \det U=\det(U^*)=1, so \det(U\vec{v}\cdot\vec{s}\,U^*)=\det(\vec{v}\cdot\vec{s}), and by Eq. (11) we must have ||\tilde{\vec{v}}||=||\vec{v}||. In other words, the transformation (which is linear) \vec{v}\mapsto \tilde{\vec{v}} is an isometry – it preserves the lengths of all vectors. And it is known that every isometry that maps 0 into 0 (which is the case here) is given by an orthogonal matrix (see Wikipedia, Isometry). Therefore there exists a unique orthogonal matrix R such that \tilde{\vec{v}}=R\vec{v}. This way we have a map from \mathrm{SU}(2) to O(3) – the group of orthogonal matrices. We do not know yet whether \det(R)=1, which is the case, but it needs a proof, some proof….
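
While waiting for that proof, one can already convince oneself numerically. Here is a minimal Python/NumPy sketch of the jaw operation (my own illustration; the helper name jaw is invented) that extracts R column by column and checks that it is an isometry, orthogonal, and of determinant one:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, 1j], [-1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def jaw(U, v):
    """The 'jaw operation': take v.s in the jaws and read the new vector back."""
    M = U @ (v[0]*s[0] + v[1]*s[1] + v[2]*s[2]) @ U.conj().T
    return np.array([M[0, 1].real, M[0, 1].imag, M[0, 0].real])

# A random U in SU(2), built from its first column (a, c) with |a|^2+|c|^2 = 1.
a, c = np.random.randn(2) + 1j*np.random.randn(2)
norm = np.sqrt(abs(a)**2 + abs(c)**2)
a, c = a/norm, c/norm
U = np.array([[a, -np.conj(c)], [c, np.conj(a)]])

# The matrix R of the linear map v -> v~, obtained column by column.
R = np.column_stack([jaw(U, e) for e in np.eye(3)])

v = np.random.randn(3)
print(np.allclose(np.linalg.norm(R @ v), np.linalg.norm(v)))  # True: an isometry
print(np.allclose(R.T @ R, np.eye(3)))                        # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))                      # True: det R = 1 (to be proved)
```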