Hey, long time without posting. Hope this will change drastically in the near future. In the meantime… can anybody tell me *what’s that!?* :)


# Who needs doors, when I can tunnel?

Tunneling is one of the mysterious features of quantum mechanics, but there is a very nice way to visualize it. There is a simulation method for obtaining the ground state of quantum systems, called *path integral Monte Carlo*. It is based, as its name suggests, on Feynman’s path integral approach to quantum mechanics, but I will not go deeper into that… The idea is the following: a particle becomes a set of many, many copies, *beads* or *replicas*. Which one is the real particle? All of them, and none. Then link them all, each one to the next, with a spring whose natural length is zero and whose spring constant increases with the mass. Now put the whole system at a “fake temperature”, which depends on … and that’s all! Simulate that, just using Monte Carlo, and the equilibrium distribution that you obtain is the ground state of the quantum system.
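To make this concrete, here is a minimal sketch of path integral Monte Carlo in one dimension, written by me for illustration (the double-well potential, the number of beads and the “fake temperature” are all arbitrary choices of mine, not the ones used in the video):

```python
import math, random

random.seed(1)

# Illustrative parameters (my own, not from the post):
m, beta, M = 1.0, 10.0, 64        # mass, inverse "fake temperature", number of beads
tau = beta / M                     # imaginary-time step between neighbouring beads

def V(x):                          # double-well potential: two minima, a thin barrier
    return (x * x - 1.0) ** 2

def spring(a, b):                  # "spring" energy linking consecutive beads
    return 0.5 * m * (a - b) ** 2 / tau

beads = [0.0] * M                  # the ring polymer, all beads at the origin

def sweep():
    for k in range(M):
        left, right = beads[(k - 1) % M], beads[(k + 1) % M]
        old, new = beads[k], beads[k] + random.uniform(-0.5, 0.5)
        dS = (spring(left, new) + spring(new, right) + tau * V(new)
              - spring(left, old) - spring(old, right) - tau * V(old))
        if dS < 0 or random.random() < math.exp(-dS):
            beads[k] = new         # Metropolis acceptance

for _ in range(2000):
    sweep()

# After equilibration the beads sample the ground-state distribution;
# tunneling shows up as the polymer stretching a bead across the barrier.
print(sum(beads) / M)
```

Run long enough, a histogram of the bead positions approaches |ψ₀|² for the double well.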

In the simulation, the potential is represented by the background colors: blue is low, orange is high. So, you see, the potential consists of many minima, of which the central one is deepest. In fact, the energy of the particle is not enough to jump over the barriers… but that does not matter now. The “ring polymer” can jump, even if a classical particle can’t. The height of the barrier is not a huge problem *if it is thin*, because in that case the “spring” can stretch and make one of the *beads* jump over it! That’s tunneling, indeed.

So, what you’re observing in that video is the *quantum cloud*. In fact, each ring polymer represents a possible *history* for the particle, returning to the initial point. Each *bead* corresponds to the position of the particle at a certain instant, and the energy in the springs corresponds to… the kinetic energy, which will not be zero because of the uncertainty principle.

If you need more explanations (I would!), read qfluct…

# Quantum dreams

*Quantum mechanics, the dreams stuff is made of…* (David Moser)

A quantum particle, prisoner in a square box with infinite walls, starts out with minimal energy, which grows and grows, slowly… although, no matter how much energy it gathers, no matter that it grows quadratically… it will never escape…

You can also see them as the vibrational modes of a square drum. The animation looks continuous because I interpolated between the modes for a smoother visualization…
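For the curious, here is a minimal sketch (mine, not the actual code behind the video) of the ingredients of such an animation: the eigenmodes of the square box, ψ_{nm}(x,y) = sin(nπx/L) sin(mπy/L), whose energies grow quadratically with the quantum numbers. Grid size and units (ħ = mass = 1) are illustrative choices.

```python
import math

L, N = 1.0, 50                      # box side and grid resolution (illustrative)

def mode(n, m, x, y):
    """psi_{nm}(x, y) = sin(n pi x / L) * sin(m pi y / L), zero on the walls."""
    return math.sin(n * math.pi * x / L) * math.sin(m * math.pi * y / L)

def energy(n, m):
    """E_{nm} grows quadratically with the quantum numbers (hbar = mass = 1)."""
    return (math.pi ** 2 / (2 * L ** 2)) * (n ** 2 + m ** 2)

# Sample the (2,1) mode on a grid; a plotting library would turn this into a frame.
grid = [[mode(2, 1, i * L / N, j * L / N) for i in range(N + 1)]
        for j in range(N + 1)]

print(energy(1, 1), energy(2, 1))   # the energy ladder the animation climbs
```

Interpolating smoothly between successive `grid` frames gives the “continuous” look mentioned above.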

# Emmy Noether

March 8th is the international working women’s day, so I guess it’s only fair to write a blog entry about my favourite woman physicist… who happens to be Amalie (Emmy) Noether. I will not focus so much on her life, but on the most wonderful theorem of mathematical physics ever imagined by human minds, which was her brain-child…

About her life, I will only remind you that she was the first woman teacher at the University of Göttingen, recruited by Hilbert and Klein in 1915. Göttingen was the most important center for theoretical physics at that time. It took a lot of arguing… One faculty member said, “What will our soldiers think, when they come back home and are asked to study at the feet of women?”, and Hilbert gave his famous response: “This is a university, not a bath house”… Being a Jew and a socialist, she had to flee from Germany when Hitler came to power, and ended up in the US… You can read Wikipedia and many other sources for more info.

About her work… well, for me, the most impressive result of mathematical physics is known as Noether’s theorem. I’ll try to explain it in simple terms: if your physical system has a symmetry, then it has a conserved quantity. Conservation of energy is due to invariance under time translation: physics is the same today and tomorrow. Conservation of momentum is due to invariance under spatial translations: physics is the same here, in Vladivostok or in Alpha Centauri. And so on. How come? I’ll try to give a derivation that makes you feel the thrill, yet does not get stuck in technical details…

Let us consider the space of all possible physical configurations of a system. In classical mechanics of point particles, a configuration is specified when you give all the positions and momenta, so a point in it will be given by $x = (q_1,\dots,q_N,\,p_1,\dots,p_N)$. Time-evolution is a *flow* in this configuration space. A flow is just putting a vector at each point of space, indicating the direction and speed with which you should move if you’re there. But there are many other interesting flows in configuration space, which correspond to operations other than time evolution. You might consider the flow induced by rotating the whole system, or translating it, or stretching it…

All of those flows can be expressed in terms of *generating functions*. Consider any scalar function defined on the configuration space, *f(x)*. Its flow is defined in the following way. Get the gradient, $\nabla f$, which is a vector field. You might consider it to be the flow, but that is not convenient. We apply to it a certain matrix, call it ω, the *symplectic matrix*. This way, the flow of a function *f* is given by $\dot{x} = \omega\,\nabla f(x)$. The only thing that you need to know about ω is that ω*u* is always perpendicular to *u*. If you move along a direction which is perpendicular to the gradient of a function, you keep the value of that function constant, right? So, moving along the flow preserves the value of *f*. The flow of *f* preserves *f*.

Now, apply this story to time evolution. Its flow is induced by the hamiltonian: $\dot{x} = \omega\,\nabla H(x)$. Of course, this means that time evolution will preserve the value of *H*. OK, we knew that! In components, these are the familiar equations of motion:

$$\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}$$

What about other flows? Since I’m trying to keep things non-technical, I won’t prove the following assertions. Spatial translations are generated by the *momentum*, *f(x)=p*. Rotations are generated by the *angular momentum* (around the z-axis, say): $L_z = q_x p_y - q_y p_x$… What does it mean? Let’s say that you’re rotating your system by an angle α around the z-axis. You want to know the position of all the particles after such a rotation. Then, you get the “equations of motion”:

$$\frac{dq_x}{d\alpha} = -q_y, \qquad \frac{dq_y}{d\alpha} = q_x$$

(and similarly for the momenta).

Let’s say that we want to know how one of these functions *f* evolves with time. Then, we differentiate it with respect to time:

$$\frac{df}{dt} = \nabla f \cdot \dot{x} = \nabla f \cdot \omega\,\nabla H$$

This object is important, so we give a name to it, the *Poisson bracket*, *{f,H}*.

So, *{f,g}* means “how *f* evolves under the flow induced by *g*”. Its main property is that *{f,g}=-{g,f}*, because of the properties of ω.

Now, Emmy Noether’s magic in action. Let us say that *f* is a symmetry of the system. This means that the hamiltonian does not evolve under the flow induced by *f*. So, *{H,f}=0*. But then, *{f,H}=0* also! And this means that *f* does not change under the flow induced by *H*, i.e. under time evolution. So, *f* is a *conserved quantity*!
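Noether’s magic can also be watched numerically. Here is a toy simulation of mine (not from the original post; all parameter values are illustrative): a translation-invariant hamiltonian, H = p₁²/2 + p₂²/2 + V(q₁ − q₂), evolved in time, with the generator of translations, the total momentum, staying constant.

```python
# Translation invariance of H means internal forces come in action-reaction
# pairs, so the Noether charge p1 + p2 is conserved along the time flow.
def force(x1, x2):
    return -(x1 - x2)              # from V(d) = d**2 / 2, force on particle 1

x1, x2, p1, p2 = 0.0, 1.5, 0.3, -0.1
P0 = p1 + p2                       # the Noether charge generating translations
dt = 0.001

for _ in range(10000):             # symplectic Euler time evolution
    f = force(x1, x2)
    p1 += dt * f
    p2 -= dt * f                   # Newton's third law = translation invariance
    x1 += dt * p1
    x2 += dt * p2

print(abs(p1 + p2 - P0))           # stays at zero up to round-off
```

Break the symmetry (say, add an external potential acting on one particle only) and the conservation immediately fails.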

And this is Noether’s theorem: for every continuous symmetry of a system, there is a conserved quantity. It is, of course, the generator of that symmetry. If you have translation symmetry, momentum is preserved. Rotation symmetry: angular momentum is preserved. For more intricate symmetries, there are more abstract conserved quantities. For example, the esoteric *gauge symmetry* explains, via Noether’s theorem, the conservation of charge! And the conservation of energy? That’s the easiest: it’s just the symmetry under time translation…

For more info, besides Wikipedia (not the best site…), check John Baez’s explanation, or this page, or any good book on classical mechanics.

*OK, this was a tribute to my favourite woman physicist of all times… But, as of today, I also want to pay tribute to the ones I’ve met in my life: Silvia, Pushpa, Mar, Lourdes, Carmen, Nuria, Lola, Nina, Sagra, Elena, Vanessa, Susana, Rosa, Arantxa, Diana and all the rest…*

# There’s music in the primes… (part I)

OK, now a new section at physicsnapkins, in which I will discuss a bit of my own research… Recently, Germán Sierra and I submitted to the arXiv a paper about the Riemann hypothesis, which you can see here. To be honest, the real expert in the field is Germán; my contribution is mostly technical. Anyway, I’ll try to convey here the basic ideas of the story… We’ll take a walk through the concepts, assuming only freshman-level maths.

It’s fairly well known that the sum of the reciprocal numbers diverges: $\sum_{n=1}^{\infty} \frac{1}{n} = \infty$. Euler found, using an amazing trick, the sum of the inverse squares and, in fact, the sum of the inverses of any even power. This formula is simply amazing: $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$, isn’t it? Now, Riemann defined the “zeta” function, for any possible exponent:

$$\zeta(z) = \sum_{n=1}^{\infty} \frac{1}{n^z}$$

So, we know that $\zeta(1)=\infty$, $\zeta(2)=\pi^2/6$, and many other values. Riemann asked: what happens when *z* is complex? Complex function theory is funny… We know that *z=1* is a singularity. If you do a Taylor series around, say, *z=2*, the radius of convergence is the distance to the nearest singularity, so *R=1*. But now your function is well defined in a circle of center *z=2* and radius *R=1*. This means that you can expand the function again in a Taylor series from any point within that circle. And, again, the radius of convergence will be the distance to the closest singularity. This procedure is called analytic continuation.
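You can play with the defining series yourself. A quick numerical sketch of mine (the truncation points are arbitrary): the partial sums at z = 1 keep growing, while at z = 2 they settle near π²/6.

```python
import math

def zeta_partial(z, terms=100000):
    """Partial sum of the series defining zeta(z), truncated at `terms`."""
    return sum(n ** (-z) for n in range(1, terms + 1))

print(zeta_partial(1, 1000), zeta_partial(1, 100000))  # still growing: diverges
print(zeta_partial(2), math.pi ** 2 / 6)               # these two agree closely
```

Of course, the series itself only converges for Re z > 1; everything to the left is reached by the analytic continuation described above.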

Well… in the case of the Riemann function, the only singularity is *z=1*. Therefore, I can do the previous trick and… bypass it! Circle after circle, I can reach *z=0* and get a value, which happens to be… −1/2. So, somehow, we can say that, if we had to give a value to the sum 1+1+1+1+…, it should be −1/2. Also, and even more amazing, $\zeta(-1) = 1+2+3+4+\cdots = -\frac{1}{12}$. Hey, that’s really perverted maths! Can this be useful for real life, i.e. for *physics*? Well, it is used in string theory, to prove that you need (in the simplest bosonic case) dimension D=26… but that’s not true physics. It is needed, though, for the computation of the Casimir effect. Maybe I’ll devote a post to that someday. Anyway, this is the look of the Riemann zeta function in the complex plane:

Even more surprises… $\zeta(-2)=0$. In fact, it’s easy (ok, ok… it’s easy when you know how!) to prove that $\zeta(-2n)=0$ for all positive *n*. Those are called the *trivial zeroes* of the Riemann zeta function (amazing!)… So, what are the non-trivial ones? Riemann found a few zeroes which were not at negative even numbers. But all of them had something in common: their real part was 1/2. And here comes the **Riemann hypothesis**: maybe (maybe) all the non-trivial zeroes of the function have real part 1/2.

OK, I hear you say. I got it. But I still don’t get the fun about the title of the post, and why so much fuss about it. Here it comes…

Euler himself (all praise be given to him!) found an amazing relation, which I encourage you to prove by yourselves:

$$\zeta(z) = \prod_{p\ \text{prime}} \frac{1}{1 - p^{-z}}$$

That is where the real link begins. A hint for the proof: expand the product:

Wonderful. But $\frac{1}{1-p^{-z}}$ can be easily recognized as the sum of a geometric series, right?
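Here is a quick numerical sanity check of Euler’s product, a sketch of mine (the simple sieve and the truncation at 1000 are arbitrary choices): the product of 1/(1 − p⁻ᶻ) over primes approaches ζ(z).

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

z = 2
product = 1.0
for p in primes_up_to(1000):
    product *= 1.0 / (1.0 - p ** (-z))   # each factor is a geometric series

print(product, math.pi ** 2 / 6)         # both sides close to zeta(2)
```

Expanding each factor as 1 + p⁻ᶻ + p⁻²ᶻ + … and multiplying everything out reproduces each n⁻ᶻ exactly once, by unique factorization: that is the hint above.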

OK, in a few days, I’ll post the second part, explaining why there’s music in the primes, and how quantum mechanics might save the day…

# Chameleons

The old man stared at us and spoke thus: “Those were the days, in Mod Island… We lived surrounded by beautiful colorful chameleons… I remember that, the day I arrived, 17 of them were blue, 15 were red and 13 were yellow. But, whenever two chameleons of different colours met, they would start a funny dance after which both would change to the colour which neither of them had. For example, if a blue and a red chameleon met, after the dance they would both become yellow. These dances took place for a long time, until all the chameleons became the same colour, and we knew it was time to leave…”.

“You’re lying”, said Alice. The old man grinned… “How are you so sure?” And Alice replied back, “Because, with your numbers, it’s impossible that all chameleons become the same colour”.

Is Alice right?

# More about Euler’s crazy sums

Recently we talked about how Euler managed, through some magic tricks, to find that the sum of the inverses of the square numbers is $\pi^2/6$… That was a piece of *virtuosity*, but even geniuses sometimes slip up. This one is funny. You may know the sum of a geometric series:

$$1 + x + x^2 + x^3 + \cdots = \frac{1}{1-x}$$

(There are many ways to understand that formula; I’ll give you a nice one soon.) Now, using the same formula, sum the *inverse powers*:

$$\frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \cdots = \frac{1/x}{1 - 1/x} = \frac{1}{x-1}$$

So far, so good. Now, imagine that we want the sum of *all* powers, positive and negative:

$$\cdots + \frac{1}{x^2} + \frac{1}{x} + 1 + x + x^2 + \cdots = \frac{1}{x-1} + \frac{1}{1-x} = 0$$

Amazing!!! The sum of all powers is zero!! Nice, eh? Euler concluded that this proved the possibility of the creation of the world from nothing, *ex nihilo*. So, this is easy… what is the problem with this proof?

BTW, I read this story and many others in William Dunham’s book The master of us all…

# Time travel from classical to quantum mechanics

I would like to return to the time travel questions we posed on this entry. Basically, we want to understand Polchinski’s paradox, which we show in this pic. So, imagine that you have a time machine. You launch a ball into it in such a way that it will come *out of it* one second earlier. And you are so evil that you prepare things so that the outcoming ball will collide with the incoming one, preventing it from entering the machine. The advantage of this paradox is that it does not involve free will, or people killing grandpas (the GPA, Grandfathers Protection Association, has filed a complaint against the theoretical physics community, and for good reason).

No grandpas are killed, sure, but maybe the full idea of time-travel is killed by this paradox. Why should we worry? Because general relativity predicts the possibility of time-travel, and general relativity is a beautiful and well-tested physical theory. We’re worried that it might not be consistent…

There is a seminal paper by Kip Thorne and coworkers (PRD 44, 1077), which you can find here, which advances the possibility that there are no paradoxes at all… how come? In the machine described above we have focused on a trajectory which gives an inconsistent history. But there might be other *similar* trajectories which give consistent histories. In fact, there are infinitely many of them, so our problem is now which one to choose! But let us not go too fast; let us describe how the “nice” trajectories would come about.

A possible alternate history: the ball travels towards the machine with speed v, but out of it comes, one second before the collision, a copy of itself with speed v’>v, in such a way that the collision does not change the direction of the initial ball (a glancing collision), but it also accelerates it… up to v’, thus closing the circle! There are no problems with conservation of energy and momentum, since the final result is a ball with speed v…

Thorne et al. described, for a case similar to our own, infinitely many consistent trajectories… And the question is left open: is there any configuration which gives *no consistent trajectories at all?* So far, none has been found, but there is no proof either way.

And what happens when we have more than one possible consistent trajectory? My feeling is that we’re forced to go **quantum**! Classical physics is just an approximation. Nature, really, follows all paths, and makes them interfere. But if there is a minimum-action path, then it, under some conditions, may be the most important one. Quantum mechanics is happy with lots of consistent histories: they would just interfere… And a lot of funny things happen then, but let us leave that for another post…

So, what do you think? Will it always be possible to find a consistent history, or not? Are there true paradoxes in time travel?

# Drawing knots

A problem came to my desk from the hands of Dani (a nice source of problems, btw): drawing knots on a computer! I messed around the web and found a couple of pstricks… but they were not general enough, so I made up my mind to try my own generator. This way xknots was born.

Xknots reads a file in a certain format and renders a postscript file for a 2D view of the knot. What is a 2D view of the knot? OK, here you have one, which goes by the name of trefoil knot:

First of all, *mathematical* knots are closed curves in $\mathbb{R}^3$. So, no dangling ends. A 2D view is just flattening the knot, and marking at each crossing which thread is above which. My idea was to invent a *description rule* for each knot. First, we state the number of *crossings*: 3 in our case. In order to understand a crossing, let us look at the next picture:

There are two types of crossings: L and R. All of them have four legs, numbered 1 to 4. So, in order to describe the knot, we give the coordinates of the crossings, along with their type. For example:

(200,100) L

That’s a nice crossing description. We can also, if needed, specify the angles:

(200,100) L A 60 90

The “A 60 90” means that the first leg will point at 60 degrees (counterclockwise) from the X axis, and the angle between the 1 and 2 legs is 90 degrees. Once the crossings are done, we join them with lines, for example:

1-2 3-4

This means: join the 2nd leg of the 1st crossing to the 4th leg of the 3rd crossing. If needed, we can specify how “extended” that line should be. This way,

1-2 3-4 F 2

means that this line wants to go very far away from the shortest path (the default is F 1). The full code for the trefoil is:

```
3
(120,120) L A 60 120
(180,120) L A 0 120
(150,172) R A 60 60
1-1 3-3
1-2 3-2 F 2
1-3 2-4 F 2
1-4 2-3
2-1 3-1 F 2
2-2 3-4
```

(all calculated to make a nice equilateral triangle). Here you have a couple of knots more:

The first is the “borromean rings”, the second goes by the funny name of … So, you can find the source code (C++ for linux, but pretty standard), more ideas and more explanations at the main webpage of the project…
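As a toy illustration of the file format described above (a sketch of mine, not part of xknots itself; the function name and the dictionary layout are made up), a crossing line such as `(200,100) L A 60 90` can be parsed like this:

```python
import re

# One crossing per line: "(x,y) TYPE [A angle1 angle2]", TYPE in {L, R}.
CROSSING = re.compile(
    r"\((?P<x>\d+),(?P<y>\d+)\)\s+(?P<type>[LR])"
    r"(?:\s+A\s+(?P<a1>\d+)\s+(?P<a2>\d+))?")

def parse_crossing(line):
    """Return position, type and optional angles of one crossing line."""
    m = CROSSING.match(line.strip())
    if not m:
        raise ValueError("not a crossing: " + line)
    return {"pos": (int(m["x"]), int(m["y"])),
            "type": m["type"],
            "angles": (int(m["a1"]), int(m["a2"])) if m["a1"] else None}

print(parse_crossing("(200,100) L"))
print(parse_crossing("(120,120) L A 60 120"))
```

The connection lines (`1-2 3-4 F 2`) would get a second, similar pattern; the real reader lives in the C++ source at the project page.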

And thanks to Dani & Alberto!

# Euler’s crazy sums

Rigour is the hygiene of the mathematician, but it is not their source of nutrients… Here you have a fantastic piece of work by Leonhard Euler, where he showed how to reason mathematically, maybe without rigour, but with a rich and healthy intuition.

In the early days of calculus, people were fascinated by power series. Euler knew very well the Taylor series of the sine around zero:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$

OK, so the sine function, *in a sense*, is a polynomial… Well… We know *a few things* about polynomials, don’t we? For example: if $P(x)$ is a degree 3 polynomial and its roots are $a$, $b$ and $c$, then:

$$P(x) = k\,(x-a)(x-b)(x-c)$$

And the constant $k$ can be fixed if we know a single value, for example $P(0)$. So… let us do it with the sine function. We know its zeroes: $x = n\pi$ for all integer $n$:

$$\sin x = k\,x\,(x-\pi)(x+\pi)(x-2\pi)(x+2\pi)\cdots$$

A little bit of regrouping:

$$\sin x = k'\,x \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right)$$

Now, what can the constant be? If $x \to 0$, then $\sin x / x$ is 1. A trick and we get rid of the constant:

$$\sin x = x \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right)$$

Wow, a remarkable formula on its own! You can check it numerically: it converges slowly beyond the first maximum, but it converges indeed. But we can get more from it. It *should* coincide with the Taylor series of the sine, right? (hm…) The constant term is easy, just 1. The quadratic term is not too hard either. Just add up all the products which consist of all ones and a single term. You get:

$$\frac{\sin x}{x} = 1 - \left(\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}\right)x^2 + \cdots$$

to make it equal to the Taylor term for $x^2$, which is $-x^2/3! = -x^2/6$. Now, make the coefficients equal:

$$\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2} = \frac{1}{6}$$

Or, equivalently,

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$$

Wow! Of course, there are *rigorous* ways to prove this formula. The first one I learnt uses Fourier series. But this one is fun, isn’t it? :)
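If you want to check Euler’s product for the sine numerically, here is a quick sketch of mine (the truncation at 10000 factors and the test point are arbitrary):

```python
import math

def sine_product(x, N=10000):
    """Truncated Euler product: x * prod_{n=1..N} (1 - x^2 / (n^2 pi^2))."""
    result = x
    for n in range(1, N + 1):
        result *= 1.0 - x * x / (n * n * math.pi ** 2)
    return result

x = 1.3
print(sine_product(x), math.sin(x))   # the truncated product is already close
```

As the post says, the convergence is slow (the error of the truncation falls off only like 1/N), but it does converge to sin x.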

Not all of Euler’s crazy sums were right. Some were amazingly wrong. But the wrongs of the genius are also funny… So I will soon discuss them here.

Happy summing!