Periods of Jacobi elliptic functions – Part 1

This one is another purely technical post. The point is that I am not very happy with any of the existing expositions of Jacobi elliptic functions. I want to organize the material my own way which, I am sure, is better than all other ways. Of course, what is better and what is not depends on the interests of the particular person and on the purpose of the study. I am not completely sure what my purpose is, but somehow I am navigating and, at some point, I will be able to shout: Tierra! Tierra! But not yet. So I continue.

First: a summary of the essential properties of the elliptic functions \sn,\cn,\dn.

In fact the properties below are sufficient to completely define these elliptic functions. This is how these functions are defined in an old but good (and freely available) monograph by Dixon, The elementary properties of the elliptic functions, with examples, 1894.

Definition of elliptic functions \sn,\cn,\dn:

(1)   \begin{equation*} \frac{d}{du}\,\sn\, u=\cn\, u\,\dn\, u,\end{equation*}

(2)   \begin{equation*} \cn^2 u+\sn^2 u=1,\end{equation*}

(3)   \begin{equation*} \dn^2\,u+k^2\,\sn^2\,u=1.\end{equation*}

(4)   \begin{equation*}\sn\, 0=0,\quad \cn\,0=\dn\,0=1.\end{equation*}

Quite often the argument m is omitted; it simplifies the notation. Sometimes we use m, sometimes k. For instance, in the formulas above \sn u should be written as \sn(u,m), but some people will write it as \sn(u,k), where k^2=m. After a little training there should be no confusion.
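By the way, these defining properties are easy to check numerically. Here is a little sketch in Python with scipy (just a quick check, not part of the derivations; scipy's ellipj uses the parameter m=k^2 and returns \sn, \cn, \dn and the amplitude):

# numerical spot-check of the defining properties (1)-(4)
from scipy.special import ellipj

m = 0.7
u = 1.3
h = 1e-6                                  # step for a central finite difference

sn, cn, dn, _ = ellipj(u, m)
dsn = (ellipj(u + h, m)[0] - ellipj(u - h, m)[0])/(2*h)

print(dsn - cn*dn)                        # Eq. (1): close to zero
print(sn**2 + cn**2 - 1)                  # Eq. (2): close to zero
print(dn**2 + m*sn**2 - 1)                # Eq. (3): close to zero
print(ellipj(0.0, m)[:3])                 # Eq. (4): sn 0 = 0, cn 0 = dn 0 = 1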

Previously, we have also found the addition formula for \sn(u+v). We could also prove the addition formulas for \cn(u+v) and \dn(u+v), but I will not do it here. I am just copying them from Eqs. (48)-(50) in Jacobi Elliptic Functions on Wolfram’s site.

Addition formulas for Jacobi elliptic functions:

(5)   \begin{eqnarray*} \sn (u+v)&=&\frac{\sn\,u\,\cn\,v\,\dn\,v+\sn\,v\,\cn\,u\,\dn\,u}{1-k^2\,\sn^2 u\,\sn^2 v},\\ \cn (u+v)&=&\frac{\cn\,u\,\cn\,v-\sn\,u\,\sn\,v\,\dn\,u\,\dn\,v}{1-k^2\,\sn^2 u\,\sn^2 v},\\ \dn (u+v)&=&\frac{\dn\,u\,\dn\,v-k^2\sn\,u\,\sn\,v\,\cn\,u\,\cn\,v}{1-k^2\,\sn^2 u\,\sn^2 v}. \end{eqnarray*}
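These, too, are easy to spot-check numerically, in the same Python style as above:

# spot-check of the addition formulas (5) at sample arguments (k^2 = m)
from scipy.special import ellipj

m = 0.36
u, v = 0.8, 1.7

su, cu, du, _ = ellipj(u, m)
sv, cv, dv, _ = ellipj(v, m)
s, c, d, _ = ellipj(u + v, m)

den = 1 - m*su**2*sv**2
print(s - (su*cv*dv + sv*cu*du)/den)      # sn(u+v): close to zero
print(c - (cu*cv - su*sv*du*dv)/den)      # cn(u+v): close to zero
print(d - (du*dv - m*su*sv*cu*cv)/den)    # dn(u+v): close to zero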

We have also defined F(\phi,m)

(6)   \begin{equation*}F(\phi,m)=\int_0^\phi\frac{d\theta}{\sqrt{1-m\,\sin^2\theta}},\end{equation*}

and, for m\leq 1, we have defined the function \am (u,m) as the inverse function of F(\phi,m): If u=\int_0^\phi\frac{d\theta}{\sqrt{1-m\,\sin^2\theta}}, then \am (u,m)=\phi. Moreover

(7)   \begin{equation*}\sn (u,m)=\sin \am (u,m),\quad \cn (u,m)=\cos \am(u,m).\end{equation*}
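In scipy, F(\phi,m) is ellipkinc and the amplitude \am is the fourth value returned by ellipj, so both the inverse-function relation and Eq. (7) are easy to check; a quick sketch:

# am(F(phi,m), m) = phi, and sn = sin(am), cn = cos(am)
import numpy as np
from scipy.special import ellipkinc, ellipj

m, phi = 0.5, 1.1
u = ellipkinc(phi, m)                     # u = F(phi, m)
sn, cn, dn, am = ellipj(u, m)

print(am - phi)                           # close to zero
print(sn - np.sin(phi), cn - np.cos(phi)) # Eq. (7): close to zero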

The function F(\phi,m) is called “the incomplete elliptic integral of the first kind”. From the shape of the function \sin^2\theta (it is periodic with period \pi and symmetric about \theta=\pi/2) it is clear that

    \[F(2\pi,m)=2F(\pi,m)=4F(\pi/2,m).\]

The value F(\pi/2,m) is called the complete elliptic integral of the first kind, and denoted as K

(8)   \begin{equation*}K=K(m)=K(k)=F(\pi/2,m)=\int_0^{\pi/2}\frac{d\theta}{\sqrt{1-k^2\,\sin^2\theta}}.\end{equation*}
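Numerically (a quick Python check; scipy's ellipkinc extends F beyond \pi/2 the same way the integral does):

# F(2*pi, m) = 2 F(pi, m) = 4 F(pi/2, m) = 4 K(m)
import numpy as np
from scipy.special import ellipkinc, ellipk

m = 0.8
print(ellipkinc(2*np.pi, m) - 4*ellipk(m))    # close to zero
print(ellipkinc(np.pi, m) - 2*ellipk(m))      # close to zero
print(ellipkinc(np.pi/2, m) - ellipk(m))      # close to zero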

Thus we can write \am\, K=\pi/2, and therefore \sn\, K=\sin \pi/2=1,\, \cn\,K=\cos\pi/2=0, while Eq. (3) then gives \dn\,K=\sqrt{1-k^2}. This way we can build the following table of values:

    \[\sn\,0=0,\quad \cn\,0=1,\quad \dn\,0=1,\qquad \sn\,K=1,\quad \cn\,K=0,\quad \dn\,K=k',\]

where

    \[k'=\sqrt{1-k^2}.\]

Then from the addition formulas, setting v=K, we obtain:

    \[\sn(u+K)=\frac{\cn\,u}{\dn\,u},\quad \cn(u+K)=-k'\,\frac{\sn\,u}{\dn\,u},\quad \dn(u+K)=\frac{k'}{\dn\,u}.\]

Applying these twice gives \sn(u+2K)=-\sn\,u,\ \cn(u+2K)=-\cn\,u,\ \dn(u+2K)=\dn\,u, and therefore \sn(u+4K)=\sn\,u,\ \cn(u+4K)=\cn\,u. Thus 4K is a real period of \sn and \cn, while 2K is a real period of \dn.

The case of m>1

When m>1, the expression 1-m\sin^2\theta under the square root in the integrand of the integral (8) defining K(m) passes through negative values. Thus K(m), for m>1, is in general a complex number. During the discussion under the last post Period of a pendulum, the following conjecture appeared:

Conjecture

Assuming k>1,\, m= k^2, the following identity holds:

(9)   \begin{equation*}K(m)=\frac{K(\frac{1}{m})-i\,K(1-\frac{1}{m})}{k}.\end{equation*}

I do not have a rigorous proof of that conjecture. What I have is a plot, made with Mathematica, of the numerical difference between the left- and the right-hand side of Eq. (9). Here it is:

As you can see, it looks like a solid zero! One can also verify that both the real and the imaginary part of the right-hand side, as functions of k, satisfy the differential equation that K(m) is supposed to satisfy – Eq. (19) in Complete Elliptic Integral of the First Kind.
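For the record, the same check can be scripted without Mathematica; here is a quick sketch in Python with mpmath. Note that it assumes the principal branch of the square root, \sqrt{-x}=i\sqrt{x}; as discussed below, the sign of the imaginary part depends on this convention.

# numerical check of Eq. (9) for a sample m > 1
from mpmath import mp, mpf, mpc, sqrt, sin, asin, pi, ellipk, quad

mp.dps = 30
m = mpf('2.5')                         # any m > 1 will do
k = sqrt(m)
phi1 = asin(1/k)                       # where 1 - m*sin(theta)^2 changes sign

# left-hand side: the integral (8), with the principal complex square root
lhs = quad(lambda t: 1/sqrt(1 - m*sin(t)**2), [0, phi1, pi/2])
# right-hand side of Eq. (9)
rhs = (ellipk(1/m) - mpc(0, 1)*ellipk(1 - 1/m))/k

print(abs(lhs - rhs))                  # essentially zero, "a solid zero"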

But that is not yet a complete proof. Nevertheless I will assume that the above conjecture is true. To prove that the real part of K(m) is right is, in fact, easy. Here is how I do it: Let \phi_1 be the angle, \phi_1<\pi/2, for which m\sin^2\phi_1=1. Then the real part of K(m) is given by

    \[Re(K(m))=\int_0^{\phi_1}\frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}.\]

Introduce \tau defined by k\sin\theta=\sin\tau. Then \tau(\phi_1)=\pi/2 and k\cos\theta d\theta=\cos\tau d\tau, thus d\theta=\frac{\cos\tau d\tau}{k \cos\theta}. Moreover \sqrt{1-k^2\sin^2\theta}=\sqrt{1-\sin^2\tau}=\cos\tau. Since \cos\theta=\sqrt{1-\sin^2\theta}=\sqrt{1-\frac{1}{m}\sin^2\tau}, we obtain

(10)   \begin{equation*}Re(K(m))=\frac{1}{k}\int_0^{\pi/2}\frac{d\tau}{\sqrt{1-\frac{1}{m}\sin^2\tau}}=K(1/m)/k.\end{equation*}

I suspect that the proof for the imaginary part should not be much more complicated, but I have not succeeded in doing it yet. Therefore, at this point, I will take a break. Till tomorrow.

Update

The conjectured formula is well known. In the tables by I. S. Gradshteyn, I. M. Ryzhik we find

Here K'(k)=K(k').
There is a condition Im(k)>0, but there is no such restriction in the quoted source – MO 131. Someone must have added this condition later, probably to be on the safe side. We extend it to the case Im(k)=0. Then, solving for K(k), we get our formula.

In fact, I can almost get this formula by changes of the integration variable, except that one has to be careful about what to do with the square root of a negative number. A convention is needed. The sign of the imaginary part that I get depends on this convention.

Update 2, Sunday Jan. 29

I decided to share my derivation for the imaginary part. It is a continuation of the real-part derivation above. The imaginary part is given by the integral

    \[Im(K(m))i=\int_{\phi_1}^{\pi/2}\frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}.\]

Here the expression under the square root is negative all the time. So we write

    \[\sqrt{1-k^2\sin^2\theta}=\sqrt{-1(k^2\sin^2\theta-1)}=i\sqrt{k^2\sin^2\theta-1}\]

and therefore (since 1/i=-i)

    \[Im(K(m))i=-i\int_{\phi_1}^{\pi/2}\frac{d\theta}{\sqrt{k^2\sin^2\theta-1}}=\frac{-i}{k}\int_{\phi_1}^{\pi/2}\frac{d\theta}{\sqrt{\sin^2\theta-\frac{1}{m}}}.\]

Now we change the integration variable \theta by setting \theta=\tau+\pi/2. Then \sin^2\theta=\cos^2\tau=1-\sin^2\tau and so

    \[Im(K(m))=\frac{-1}{k}\int_{-\tau_1}^{0}\frac{d\tau}{\sqrt{1-\frac{1}{m}-\sin^2\tau}}=\frac{-1}{k}\int_{0}^{\tau_1}\frac{d\tau}{\sqrt{1-\frac{1}{m}-\sin^2\tau}},\]

where \tau_1 is the positive value of \tau for which \sin\tau=k_1, and

    \[k_1=\sqrt{1-\frac{1}{m}}.\]

Now we do the same trick that we did in the real case. We introduce \psi so that

    \[\sin\tau/k_1=\sin\psi.\]

This is possible since the integration is within the range of \tau where \sin\tau/k_1\leq 1. Then \cos\tau d\tau=k_1\cos\psi d\psi, so

    \[d\tau=\frac{k_1\cos\psi d\psi}{\cos\tau},\]

and

    \[\sqrt{k_1^2-\sin^2\tau}=k_1\sqrt{1-\sin^2\psi}=k_1\cos\psi,\]

    \[\cos\tau=\sqrt{1-\sin^2\tau}=\sqrt{1-k_1^2\sin^2\psi}.\]

So, finally

(11)   \begin{equation*}Im(K(m))=\frac{-1}{k}\int_0^{\pi/2}\frac{d\psi}{\sqrt{1-k_1^2\sin^2\psi}}=\frac{-1}{k}K(1-1/m).\end{equation*}
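A quick numerical sanity check of this result, in the same Python/mpmath style as before (the factor -1/k is left out, so the integral itself should equal K(1-1/m)/k):

from mpmath import mp, mpf, sqrt, sin, asin, pi, ellipk, quad

mp.dps = 30
m = mpf('2.5')
k = sqrt(m)
phi1 = asin(1/k)

# the integral of 1/sqrt(k^2 sin^2(theta) - 1) from phi_1 to pi/2
integral = quad(lambda t: 1/sqrt(m*sin(t)**2 - 1), [phi1, pi/2])
print(integral - ellipk(1 - 1/m)/k)    # essentially zero, confirming Eq. (11)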

Here and there I was skipping explanations and justifications, but these small cracks in the proof can easily be filled in.
The conjecture appears to have finally been proved. So, it is not a conjecture any more; it is a proven property! Uff!

Period of a pendulum

Commenting on my statement that “we can now return to the imaginary time period of the pendulum” in my last post Elliptic addition theorem, Bjab asked: “What about the real time period?”

And indeed, we did not quite finish the case of the real period. Let me recall what it is about in general terms.

He was seventeen and bored listening to the Mass being celebrated in the cathedral of Pisa. Looking for some object to arrest his attention, the young medical student began to focus on a chandelier high above his head, hanging from a long, thin chain, swinging gently to and fro in the spring breeze. How long does it take for the oscillations to repeat themselves, he wondered, timing them with his pulse. To his astonishment, he found that the lamp took as many pulse beats to complete a swing when hardly moving at all as when the wind made it sway more widely. The name of the perceptive young man, destined to make other momentous scientific discoveries, was Galileo Galilei.

Though Galileo discovered the isochronism of the pendulum as a fact of nature, he did not offer an underlying reason for his seminal observation. That explanation had to wait for the great work of Isaac Newton.

(From “Galileo’s Pendulum: From the Rhythm of Time to the Making of Matter”, by Roger G. Newton.)

Later it was discovered that the isochronism of the pendulum is not, in fact, a law of nature. Nevertheless, quoting from “Thus Spoke Galileo: The great scientist’s ideas and their relevance to the present day” by Andrea Frova and Mariapiera Marenzana:

It is interesting to note that Galileo’s mistake in treating this isochronism as perfect led to some important scientific and technological advances. Without this error, for example, perhaps no one would ever have thought of using the pendulum as a device for measuring time.


After this introduction let us do some simple math. In fact, we have already started to discuss the problem in “Nonlinear pendulum period and Kozyrev’s mirrors”. We noticed that for the mathematical pendulum with swinging amplitude \alpha smaller than \pi, we have the formula:

    \[T(m)=\frac{4K(1/m)}{\omega},\]

where m=k^2 and

    \[K(m)=F(\pi/2,m)=\int_0^{\pi/2}\frac{d\theta}{\sqrt{1-m\sin^2\theta}}.\]

The “classical period” from the schoolbooks, the one that is a good approximation for small oscillations, is

    \[ T_0=\frac{2\pi}{\omega},\]

where

    \[\omega =\sqrt{\frac{g}{l}}.\]

In the discussion under that post we have also found the relation between the value of k and the amplitude of the oscillations \alpha

    \[\frac{1}{k}=\sin\frac{\alpha}{2}.\]

Therefore we can write down the formula telling us how the period of the pendulum depends on the amplitude:

(1)   \begin{equation*}T(\alpha)=T_0\, C(\alpha),\end{equation*}

where

(2)   \begin{equation*}C(\alpha)=\frac{2K(\sin^2\frac{\alpha}{2})}{\pi}.\end{equation*}
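Before looking at the plot, here is a quick sanity check of this formula against a direct numerical integration of the pendulum equation \ddot{\theta}=-\omega^2\sin\theta (a Python sketch; the values of \omega and \alpha are arbitrary samples):

# sanity check: T = T0*C(alpha) against a numerical solution of theta'' = -omega^2 sin(theta)
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipk

omega, alpha = 1.0, 2.0                        # amplitude of 2 rad, about 115 degrees

def pendulum(t, y):
    return [y[1], -omega**2*np.sin(y[0])]

def crossing(t, y):                            # theta = 0 is reached at a quarter of the period
    return y[0]

sol = solve_ivp(pendulum, [0, 50], [alpha, 0.0], events=crossing,
                rtol=1e-10, atol=1e-12)

T_numeric = 4*sol.t_events[0][0]
T_formula = (2*np.pi/omega)*(2*ellipk(np.sin(alpha/2)**2)/np.pi)   # T0*C(alpha)
print(T_numeric, T_formula)                    # the two agree closely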

Here is the plot of the function C(\alpha)

We see that it is almost constant and equal to 1 for \alpha<\pi/2. At \alpha=\pi/2 we have, in fact:

    \[C(\pi/2)=1.18034,\]

that is, the period is about 18% larger than T_0. Then the deviation from constancy of the period becomes larger and larger, and dramatically so near \alpha=\pi. But, of course, experiments with bobs hanging on strings were impossible to conduct for \alpha>\pi/2.

What about experiments with \alpha<\pi/2? How do their results fit the theory? What should we tell students asking such questions? A review can be found in the paper “An accurate formula for the period of a simple pendulum oscillating beyond the small-angle regime” by F. M. S. Lima and P. Arun. They propose a simple approximation to the function C(\alpha), an approximation based on the logarithm function:

    \[C(\alpha)\approx  -\frac{\log \cos \frac{\alpha}{2}}{1-\cos\frac{\alpha}{2}}\]

For comparison I plotted both, where the approximation is in red:
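The two can also be compared numerically (a Python sketch; at 90 degrees the exact value is the 1.18034 quoted above):

# exact C(alpha) versus the logarithmic approximation of Lima and Arun
import numpy as np
from scipy.special import ellipk

def C_exact(alpha):
    return 2*ellipk(np.sin(alpha/2)**2)/np.pi

def C_log(alpha):
    c = np.cos(alpha/2)
    return -np.log(c)/(1 - c)

for deg in (30, 60, 90, 120, 150, 170):
    a = np.radians(deg)
    print(f"{deg:3d} deg   exact {C_exact(a):.4f}   "
          f"approx {C_log(a):.4f}   rel. err {C_log(a)/C_exact(a) - 1:+.2%}")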

Here is the comparison with experiments, taken from the paper:

Now we are ready to move into imaginary time. In the next post.

Elliptic addition theorem

Yesterday was a day with no apparent progress. Some days are diamonds (living spiraling force), some days are stones (elliptic) – I wrote. And indeed, it was an elliptic stone. But “no apparent progress” is not the same as “no progress whatsoever”. Yesterday was one of those days that Thomas Edison would classify as finding yet another way that will not work:

“I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.”

Attributed to Thomas Edison

And today, for a change, is a diamond. Well, perhaps not a real diamond, but zirconia at least.

After this introduction I go to the math right away. Let me recall what my problem was: I was unable to solve one of the exercises in the notes by John Baez. As we learn from Wikipedia:

John Baez is an American mathematical physicist and a professor of mathematics at the University of California, Riverside (UCR) in Riverside, California.
….
His physicist uncle, Albert Baez (inventor of the X-ray microscope and father of singer and progressive activist Joan Baez), interested him in physics as a child


I do not know what exactly a “progressive activist” is, but that is not the problem. The problem was that John Baez gave some hints on how to prove that the Jacobi elliptic sn function is periodic not only with a real, but also with an imaginary period. The suggested trick involved using gravity that attracts to the sky instead of attracting to the Earth, as we know it. But I could not figure out how to use the trick even after finding the exercise solved in the textbook on elliptic functions by Armitage and Eberlein – see my previous post Some days are diamonds (living spiraling force) some days are stones (elliptic) for details. And indeed my problem was with the details. While I had a general idea of what to do, when it came to justifying every step rigorously, I found an insurmountable hole.

This morning, however, I woke up with an idea. And lo and behold – it worked. So we have progress. Let me explain.

Suppose we want to prove that the function \sin(x), defined, say, by the power series \sin(x) = x-x^3/3!+..., is periodic. How to do it? One possible way is by first proving the formula known from school:

    \[\sin(x+y)=\sin x \cos y + \cos x \sin y.\]

Suppose we know that the above addition formula is true. Suppose additionally we know that \sin \pi=0 and \cos \pi=-1. Then, setting y=\pi in the addition formula, we get

    \[\sin(x+\pi)=-\sin x,\]

and thus

    \[\sin(x+2\pi)=\sin x.\]

So, we have the period. But first we need the addition formula. Why not apply the same method to \mathrm{sn}(u,m)? We need the addition formula. It can be found online, Eq. (48) in Jacobi Elliptic Functions. It reads:

(1)   \begin{equation*} \mathrm{sn} (u+v,m)=\frac{\mathrm{sn}(u,m)\mathrm{cn}(v,m)\mathrm{dn}(v,m)+\mathrm{sn}(v,m)\mathrm{cn}(u,m)\mathrm{dn}(u,m)}{1-m\,\mathrm{sn}^2(u,m)\,\mathrm{sn}^2(v,m)}. \end{equation*}

But how to prove it?

First I thought: I will simply follow Euler’s method, which takes just a page in a little old book that I have: F. Bowman, “Introduction to Elliptic Functions with Applications”. But after a while I realized that this method, invented by Euler, needs a certain amount of intelligence; it uses some tricks. There must be a simpler method, a “no-brain method”. Once we know what we want to prove, we can ask the computer to do it for us! Then it will take no space and no time at all. You press “Enter”, and all the work is done. And this is how I did it.

Below are the details, so if you would like to learn how to do such things, follow me!

Let us denote the right-hand side of Eq. (1), thought of as a function of v with m and u fixed, by f(v). We want to prove that f(v)=\mathrm{sn}(u+v,m). But how is the function \mathrm{sn} defined? In Derivatives of Jacobi elliptic am, sn, cn, dn, we have derived the formula

    \[\frac{d}{du}\mathrm{sn}(u,m)=\mathrm{cn}(u,m)\,\mathrm{dn}(u,m).\]

Taking the square of both sides and using

    \[\mathrm{cn}^2(u,m)=1-\mathrm{sn}^2(u,m),\quad \mathrm{dn}^2(u,m)=1-m\,\mathrm{sn}^2(u,m),\]

we have

(2)   \begin{equation*} \left(\frac{d}{du}\mathrm{sn}(u,m)\right)^2=(1-\mathrm{sn}^2(u,m))(1-m\mathrm{sn}^2(u,m)). \end{equation*}

This last differential equation, together with the initial value \mathrm{sn}(0,m)=0, determines the function \mathrm{sn}(u,m) uniquely (up to a sign, but sign is not a problem for us).
So, we need to prove that our function f(v) satisfies the differential equation:

(3)   \begin{equation*} \left(\frac{d}{dv}f(v)\right)^2=(1-f^2(v))(1-m\,f^2(v)). \end{equation*}

That f(0)=\mathrm{sn}(u,m) is obvious from the definition and the fact that \mathrm{sn}(0,m)=0,\, \mathrm{cn}(0,m)=\mathrm{dn}(0,m)=1. Since we do have the formulas for the derivatives of the elliptic functions, we can let the computer verify Eq. (3). And it is easy. You need the REDUCE program. It is a free computer algebra system and it runs on almost any platform. And it is only 50 MB!
Then we need to tell REDUCE what to do. Here is my code:

OPERATOR cn,sn,dn,h;
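% snu, cnu, dnu stand for sn(u), cn(u), dn(u), with u held fixed;
% h(v) is the right-hand side of the addition formula.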
h(v):=(snu*cn(v)*dn(v)+sn(v)*cnu*dnu)/(1-m*snu^2*sn(v)^2);
dh:=(DF(h(v),v));
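% derivative rules for the Jacobi functions: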
For all v let DF(sn(v),v)=cn(v)*dn(v);
For all v let DF(cn(v),v)=-sn(v)*dn(v);
For all v let DF(dn(v),v)=-m*sn(v)*cn(v);
dh:=dh;
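% w = (dh)^2 - (1 - m h^2)(1 - h^2) should reduce to zero: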
w:=dh^2-(1-m*h(v)^2)*(1-h(v)^2);
w;
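% impose the algebraic identities cn^2 = 1 - sn^2, dn^2 = 1 - m sn^2: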
for all v let dn(v)^2=(1-m*sn(v)^2);
w;
for all v let cn(v)^2=1-sn(v)^2;
let dnu^2=1-m*snu^2;
let cnu^2=1-snu^2;
w;
wn:=NUM(w);
wn:=wn;
OFF NAT;
OUT "H:\Reduce MyFiles\resadd.txt";
wn;
SHUT "H:\Reduce MyFiles\resadd.txt";
END;

You can replace the output path in the OUT and SHUT lines with whatever you want as the output file. You save the code above as a text file somewhere on your hard drive; I saved it as “add1.red” in a subdirectory on my drive H. Then you start REDUCE. It opens a window.

In this window you tell REDUCE to read your file, adjusting the path so that it points to where you saved it, and press ENTER. It takes less than a second and the output is written. You can look at it, and it should tell you the same as mine does, namely “0”.

And that is the end of the proof. QED. Quod erat demonstrandum. The diamond day.
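By the way, if you do not have REDUCE at hand, essentially the same computation can be scripted in Python with sympy. The sketch below is only that, a sketch: the symbol names are ad hoc, the derivative rules are put in by hand, and the identities \cn^2=1-\sn^2,\ \dn^2=1-m\,\sn^2 are imposed by polynomial division.

# a, b stand for sn(u), sn(v); ca, cb, da, db for cn(u), cn(v), dn(u), dn(v)
import sympy as sp

a, b, ca, cb, da, db, m = sp.symbols('a b ca cb da db m')

num = a*cb*db + b*ca*da              # numerator of the addition formula
den = 1 - m*a**2*b**2                # its denominator
f = num/den

# d/dv of numerator and denominator, using sn' = cn dn, cn' = -sn dn, dn' = -m sn cn
dnum = -a*b*(db**2 + m*cb**2) + ca*da*cb*db
dden = -2*m*a**2*b*cb*db
df = (dnum*den - num*dden)/den**2

# w is the numerator of (df)^2 - (1 - f^2)(1 - m f^2)
w = sp.expand(sp.numer(sp.together(df**2 - (1 - f**2)*(1 - m*f**2))))

# impose cn^2 = 1 - sn^2 and dn^2 = 1 - m sn^2 by reducing modulo these relations
for x, r in [(ca, 1 - a**2), (cb, 1 - b**2), (da, 1 - m*a**2), (db, 1 - m*b**2)]:
    w = sp.expand(sp.rem(w, x**2 - r, x))

print(w)                             # prints 0: f satisfies the differential equation (3)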

A cigar, but no mirrors.

But we can now return to the imaginary time period of the pendulum.

And so it is. We know today that Galileo was wrong. His assertion was false. Which did not prevent him from becoming famous. He discovered “an approximate truth”, that is, strictly speaking, “a lie”. Life is complicated.

As it is said:

According to Homer, the philosophy of the ancient world was that there was a third element that linked the opposing elements. Between the body and the soul, there is the spirit. Between life and death there is the transformation that is possible to the individual, between father and mother there is the child who takes the characteristics of both father and mother, and between good and evil there is the SPECIFIC SITUATION that determines which is which and what ought to be done.