There’s music in the primes… (part I)

OK, here's a new section at physicsnapkins, in which I will discuss a bit of my own research… Recently, Germán Sierra and I submitted to the arXiv a paper about the Riemann hypothesis, which you can see here. To be honest, the real expert in the field is Germán; my contribution is mostly technical. Anyway, I'll try to convey the basic ideas of the story… We'll take a walk around the concepts, assuming only a freshman maths level.

It’s fairly well known that the sum of the reciprocals of the natural numbers diverges: \sum_{n=1}^\infty 1/n\to\infty. Euler found, using an amazing trick, the sum of the inverse squares and, in fact, the sum of the inverses of any even power. His formula is simply amazing: \sum 1/n^2 = \pi^2/6, isn’t it? Now, Riemann defined the “zeta” function, for any possible exponent:

\displaystyle{\zeta(z)=\sum_{n=1}^\infty {1\over n^z}}
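Before going complex, the two values we keep coming back to are easy to check numerically. A quick Python sketch (just truncated sums, nothing fancy):

```python
# Two partial sums: 1/n grows without bound (slowly, like log n),
# while 1/n^2 settles down to Euler's pi^2/6.
import math

N = 1_000_000
harmonic = sum(1 / n for n in range(1, N + 1))
squares = sum(1 / n**2 for n in range(1, N + 1))

print(harmonic)                 # about 14.39 -- and still climbing
print(squares, math.pi**2 / 6)  # both ~ 1.6449
```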

So, we know that \zeta(1)=\infty, \zeta(2)=\pi^2/6, and many other values. Riemann asked: what happens when z is complex? Complex function theory is funny… We know that z=1 is a singularity. If you do a Taylor series around, say, z=2, the radius of convergence is the distance to the nearest singularity, so R=1. But now your function is well defined in a circle of center z=2 and radius R=1. This means that you can expand the function again in a Taylor series from any point within that circle. And, again, the radius of convergence will be the distance to the closest singularity. This procedure is called analytic continuation.


The series of circles shows how to compute the analytic continuation of a complex function.

Well… in the case of the Riemann \zeta function, the only singularity is z=1. Therefore, we can play the previous trick and… bypass it! Circle after circle, we can reach z=0 and get a value, which happens to be… -1/2. So, somehow, we can say that, if we had to give a value to the sum 1+1+1+1+…, it should be -1/2. Also, and even more amazing, \zeta(-1)=\sum_{n=1}^\infty n=1+2+3+4+\cdots=-1/12. Hey, that’s really perverse maths! Can this be useful in real life, i.e., in physics? Well, it is used in string theory to prove that (in the simplest bosonic case) you need dimension D=26… but that’s not exactly everyday physics. Closer to the lab, it’s needed for the computation of the Casimir effect. Maybe I’ll devote a post to that someday. Anyway, this is what the Riemann zeta function looks like in the complex plane:


The Riemann zeta function. Color hue denotes phase, and intensity denotes modulus. The white point at z=1 is the singularity.
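Those weird values can actually be computed. Here is a minimal Python sketch using the Euler–Maclaurin correction terms to push the series past Re z = 1 (a standard numerical trick, not the construction used in our paper):

```python
# A numerical peek past the singularity: a truncated sum plus the
# first Euler-Maclaurin correction terms extends zeta(s) to Re(s) < 1.
import math

def zeta(s, N=50):
    """Euler-Maclaurin approximation of the Riemann zeta function."""
    partial = sum(n**-s for n in range(1, N + 1))
    return partial + N**(1 - s) / (s - 1) - N**-s / 2 + s * N**(-s - 1) / 12

print(zeta(2), math.pi**2 / 6)  # both ~ 1.6449
print(zeta(0))                  # -0.5
print(zeta(-1))                 # -0.0833... = -1/12
```

Note that for s = 0 and s = -1 the correction terms cancel the divergent partial sum exactly, leaving -1/2 and -1/12.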

Even more surprises… 1^2+2^2+3^2+\cdots=0, in the sense that \zeta(-2)=0. In fact, it’s easy (OK, OK… it’s easy when you know how!) to prove that \zeta(-2n)=0 for all positive n. Those are called the trivial zeroes of the Riemann zeta function (amazing!)… So, what are the non-trivial ones? Riemann found a few zeroes which did not sit at the negative even integers. But all of them had something in common: their real part was 1/2. And here comes the Riemann hypothesis: maybe (maybe!) all the non-trivial zeroes of the \zeta function have real part 1/2.

OK, I hear you say. I got it. But I still don’t see what’s fun about the title of the post, or why so much fuss about it. Here it comes…

Euler himself (all praise be given to him!) found an amazing relation, which I encourage you to prove by yourselves:

\displaystyle{\zeta(z)=\sum_{n=1}^\infty {1\over n^z} = \prod_{p \hbox{ prime}} {1\over 1-p^{-z}}}

That is where the real link begins. A hint for the proof: expand the product:

\displaystyle{\prod_{p \hbox{ prime}} {1\over 1-p^{-z}} = {1\over 1-2^{-z}} {1\over 1-3^{-z}} {1\over 1-5^{-z}} \cdots}

Wonderful. But 1/(1-x) can be easily recognized as the sum of a geometric series, right?
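You can also watch the product do its job numerically. A small Python check for z = 2, with a homemade prime sieve:

```python
# Euler's product over primes, truncated, versus zeta(2) = pi^2/6.
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    return [n for n, is_prime in enumerate(sieve) if is_prime]

product = 1.0
for p in primes_up_to(10_000):
    product *= 1 / (1 - p**-2)  # one geometric factor per prime

print(product, math.pi**2 / 6)  # both ~ 1.6449
```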

OK, in a few days, I’ll post the second part, explaining why there’s music in the primes, and how quantum mechanics might save the day…


More about Euler’s crazy sums

Recently we talked about how Euler managed, through some magic tricks, to find that the sum of the inverses of the square numbers is \pi^2/6… That was a piece of virtuosity, but even geniuses sometimes slip up. This one is funny. You may know the sum of a geometric series:

1+x+x^2+x^3+\cdots = {1\over 1-x}

(There are many ways to understand that formula; I’ll give you a nice one soon.) Now, using the same formula, sum the inverse powers:

1+x^{-1}+x^{-2}+x^{-3}+\cdots = {1\over 1-x^{-1}} = {x\over x-1}
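As a sanity check, for a value where the series actually converges, say x = 2, a couple of lines of Python confirm the formula:

```python
# Inverse powers of x = 2: partial sums approach x/(x-1) = 2.
x = 2.0
partial = sum(x**-k for k in range(50))  # 1 + 1/2 + 1/4 + ...
print(partial, x / (x - 1))              # both ~ 2.0
```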

So far, so good. Now, imagine that we want the sum of all powers, positive and negative:

\cdots+x^{-3}+x^{-2}+x^{-1}+x^0+x^1+x^2+x^3+\cdots = {1\over 1-x} + {x\over x-1} -1 = 0

Amazing!!! The sum of all powers is zero!! Nice, eh? Euler concluded that this proved the possibility of the creation of the world from nothing, ex nihilo. So, this one is easy… what is the problem with this proof?

BTW, I read this story and many others in William Dunham’s book Euler: The Master of Us All.

I'm better than you all even when I'm wrong!

Euler’s crazy sums

Rigour is the hygiene of the mathematician, but it is not their source of nutrients… Here you have a fantastic piece of work by Leonhard Euler, in which he showed how to reason mathematically, maybe without rigour, but with a rich and healthy intuition.

In the early days of calculus, people were fascinated by power series. Euler knew very well the Taylor series of the sine around zero:

\sin(x)=x-{x^3\over 3!}+{x^5\over 5!}-{x^7\over 7!}+\cdots

OK, so the sine function, in a sense, is a polynomial… Well… we know a few things about polynomials, don’t we? For example: if P(x) is a degree-3 polynomial and its roots are x_0, x_1 and x_2, then:

P(x) = K (x-x_0)(x-x_1)(x-x_2)

And the constant can be fixed if we know a single value, for example, P(0). So… let us do it with the sine function. We know its zeroes: n\pi for all integer n:

\sin(x) = K \cdots (x+3\pi) (x+2\pi) (x+\pi) x (x-\pi) (x-2\pi) (x-3\pi) \cdots

A little bit of regrouping:

{\sin(x)\over x} = K (x^2-\pi^2) (x^2-(2\pi)^2) (x^2 - (3\pi)^2) \cdots

Now, what can the constant be? As x\to 0, \sin(x)/x \to 1. A trick and we get rid of the K:

{\sin(x)\over x} = \left( 1 - {x^2\over \pi^2}\right) \left( 1 - {x^2\over (2\pi)^2} \right) \cdots
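A truncated version of this product is easy to test directly; here is a quick Python sketch, at x = 1:

```python
# Truncated sine product versus sin(x)/x at x = 1.
import math

x = 1.0
product = 1.0
for n in range(1, 100_001):
    product *= 1 - x**2 / (n * math.pi)**2

print(product, math.sin(x) / x)  # both ~ 0.8415
```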

Wow, a remarkable formula in its own right! You can check it numerically: it converges slowly beyond the first maximum, but converge it does. But we can get more from it. It should coincide with the Taylor series of the sine, right? (hm…) The constant term is easy: just 1. The quadratic term is not too hard either. Just add up all the products consisting of ones in every factor except for a single x^2 term. You get:

-{x^2\over \pi^2} - {x^2\over (2\pi)^2} - {x^2\over (3\pi)^2} - \cdots = -{x^2\over 3!}

since it must equal the x^2 term of the Taylor series. Now, equate the coefficients:

-\sum_{n=1}^\infty {1\over n^2 \pi^2} = -{1\over 3!}

Or, equivalently,

\sum_{n=1}^\infty {1\over n^2} = {\pi^2\over 6}

Wow! Of course, there are rigorous ways to prove this formula. The first one I learnt uses Fourier series. But this one is fun, isn’t it? :)

Not all of Euler’s crazy sums were right. Some were amazingly wrong. But the wrongs of the genius are also funny… So I will soon discuss them here.

Happy summing!

Teaching fresh(wo)men calculus

I have just finished, for the second time, the fall calculus term for engineering freshwomen (and freshmen) ;) at UC3M. The classes were in English, split into two groups for the practical sessions, around 70 students in total. It was a nice group (yes, some of my students will read this, and no, I do not say this because of that… the teacher evaluation polls are already over) ;) I have been thinking about what we teach, what its purpose is and how we should do it… and I have reached a few conclusions.

  • There are two reasons to teach maths to non-mathematicians: (a) because they will need some tools which are standard in their trade, or (b) because they should learn to think, to acquire real problem-solving techniques. The contents of the calculus term (derivatives and integration in one variable, basically) are already covered in high school; only a few new things are taught here (Taylor, polar coords…). So, the real reason must be the second one.
  • That’s why I introduced two novelties. First of all, problems in “real-life format”. By this I mean that they are formulated vaguely, with no data. For example: “I want to leave my can of beer on the ground, but the ground is irregular and I’m afraid the can might fall over. I should drink a little bit of beer so that it becomes more stable. How much?”
  • Another point that was important to me was the ability to give numerical solutions and approximations… I mean: to obtain numbers even when an analytical solution is not available. We also introduced numerical computation techniques via Octave, but that had to happen outside class hours.

My only complaint: such a course, if it is to be taught properly, requires a rather low number of students per class. When I teach linear algebra, it’s OK for me to talk about eigenvalues to 120 students. That’s because the idea is completely different. I don’t know how to teach problem solving from the blackboard. There are always a few lucky cases whom you have to teach nothing: they already get your point, almost before you’ve finished stating it. With the rest, we normal mortals, it has to be done one by one…

Another important point: I would like to change the “blocks”. There should be four of them:

  1. Visualization: sketching functions, curves in polar coordinates or parametric, surfaces… And the reverse: see data and “guess” an analytical expression. Fitting experimental data.
  2. Computing: approximation schemes, estimation skills. Taylor, mean value theorem… And numerical programming skills.
  3. Optimization: all sorts of problems where some target function has to be maximized or minimized. There are few “real life” problems which cannot be re-cast in this form…
  4. Cutting into pieces and pasting back: (for want of a better name) with this I mean all kinds of problems which “reduce” to integration: areas, volumes, lengths, work of a force, average of a function, etc.

Calculus at this level can be seen by the students as a bunch of tricks. And they’re right. All of us making a living as “applied mathematicians” have a bag of ideas that come to mind when we see a new problem. Applied mathematics is just that: the ability to tackle a new problem, to find the “right metaphor” linking it to another problem that you solved years back.

Just a finishing remark: why do we have so few girls? I want a convincing answer, or I’ll move to nursery school next year! And I’m serious about that!

Is this the reason?

Log rules…

Here is a very simple problem which appeared with my first-year calculus students. Consider the function

f(x) = \ln(x^2)

Its domain is {\mathbb R}-\{0\}. But now, we may take the exponent down, using the rules of the logarithm…

f(x) = 2\ln(x)

and now the domain is (0,\infty)… What happened??? Try to explain it: (a) without complex numbers, (b) with them…
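For hint (b), a quick experiment with Python's cmath helps (taking \ln(x^2) as the example, and the principal branch of the complex logarithm):

```python
# ln(x^2) versus 2*ln(x) for a negative x, with complex logarithms.
import cmath

x = -2
a = cmath.log(x**2)   # log(4): purely real
b = 2 * cmath.log(x)  # 2*(log 2 + i*pi): picks up an imaginary part
print(a)              # (1.386... + 0j)
print(b)              # (1.386... + 6.283...j)
print(b - a)          # 2*pi*i -- the two expressions agree only up to a branch jump
```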