# Drunk Euclid still rules

On random metrics and the KPZ universality

Straight lines and circles, polygons and exact constructions. That is the clear and shiny paradise of Euclid and classical geometry. The paradigm of a perfect connection between the mind and reality. No need for awkward limiting processes, just irrefutable syllogisms. A cathedral of mathematics and, at the same time, the most successful physical theory ever designed, since it is validated by experiment millions of times a day. Yet, what happens when Euclid does not go easy on the ouzo? Can we still do such good geometry? Surprisingly, the answer is yes.

When Euclid gets drunk, instead of the classical plane we get a wiggly surface. Assume that you crumple a sheet of paper on which you have drawn some circles and straight lines. They will now look crumpled to you. But, I hear you say, there is nothing you can say about them any more, since you don’t know how the paper was crumpled!

OK, let us start easier. Let us assume that you have a surface or, for math jedis, a two-dimensional Riemannian manifold. The key concept is that of distance (also called the metric). Let us assume that you put some smart ants on the surface which can measure distances between pairs of points, and that these distances are smooth and blah-blah. The role which circles used to play on the plane now belongs to balls: the sets of points which the ants can reach from a fixed one in a given time. Balls can have all kinds of shapes, as you can see in the figure below.

What about straight lines? They are now called geodesics, which are (ahem-ahem) the curves of shortest length between two points. Balls and geodesics fulfill some nice geometric relations, such as orthogonality (i.e. perpendicularity) according to the metric. The next figure shows the set of geodesics emanating from a point, along with the balls surrounding it, for a surface with the shape of an eggcrate.

So, what happens if we crumple the Euclidean plane, randomly (angrily, if needed)? Then the metric on the manifold becomes random. We will assume that wrinkles at one point of the paper sheet are independent of wrinkles at points far away. In formal terms, we say that the metric only has local correlations. Let us now draw balls around a given point.
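To get a feeling for such balls, here is a minimal sketch (not our actual simulation code): put independent random lengths on the edges of a grid, compute shortest-path distances from the center with Dijkstra's algorithm, and call a "ball" the set of sites within a given distance. The grid size, radius, and weight distribution below are all arbitrary choices.

```python
import heapq
import random

def random_metric_ball(n=41, radius=12.0, seed=1):
    """Ball of the given radius around the center of an n x n grid whose
    edge lengths are independent random numbers: a crude discrete model
    of a metric with only local correlations."""
    rng = random.Random(seed)
    length = {}
    def edge(a, b):
        # each edge gets a random length in [0.5, 1.5], fixed once drawn
        key = (min(a, b), max(a, b))
        if key not in length:
            length[key] = 0.5 + rng.random()
        return length[key]
    center = (n // 2, n // 2)
    dist = {center: 0.0}
    queue = [(0.0, center)]
    while queue:                       # standard Dijkstra on the grid graph
        d, (i, j) = heapq.heappop(queue)
        if d > dist.get((i, j), float("inf")):
            continue
        for (a, b) in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n:
                nd = d + edge((i, j), (a, b))
                if nd < dist.get((a, b), float("inf")):
                    dist[(a, b)] = nd
                    heapq.heappush(queue, (nd, (a, b)))
    return {site for site, d in dist.items() if d <= radius}

ball = random_metric_ball()
print(len(ball))   # number of sites inside the ball
```

On large grids, plotting the boundary of such a set shows exactly the kind of roughness discussed below.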

Now the balls are rather rough. That is reasonable. Rough, but not crazily rough, just a bit. In fact, they look like balls drawn by a slightly drunk mathematician. Moreover, they have a certain appeal. Maybe they are fractal curves? Well, they are. And their dimension is d=3/2, independently of exactly how you crumple your manifold, as long as you only study balls which are much larger than the distance above which the wrinkles are correlated.

If you define the roughness $W$ of a ball as the width of the thinnest ring which can contain its boundary, then you get a pretty nice result: $W\approx R^{1/3}$. Again a very nice and simple exponent, which shows that drunk geometry hides many interesting secrets. The geodesics also have fractal behavior, if you are wondering. But we have just scratched the surface of this ethylic paradise. Suffice it to say that we found the pervasive (and mysterious) Tracy-Widom distribution when we measured the fluctuations of the local radius of the balls.

Earlier this year we published those results, together with S.N. Santalla, T. LaGatta and R. Cuerno. Don’t miss the paper, which has very nice pictures, and an even nicer introduction video, in which I crumple actual paper in front of the camera!

How come we get such nice results, independently of how we crumple the manifold? The magic lies in the notion of universality. Imagine that you have some theory defined by its small-scale behavior. When you look from very far away, some details are forgotten and some become even more relevant. This procedure of looking from further and further away is called renormalization. Typically, only a few details remain relevant when you are really far from the smallest scale. And those few details make up the universality class.

So, balls in random manifolds share universality class with other phenomena… from physics! (Well, if you know of a clear boundary between physics and math, let me know.) Growth of amorphous crystals from a vapour, forest fires, cell colonies… Basically, growth phenomena in noisy conditions are known to belong to the Kardar-Parisi-Zhang (KPZ) class. They share the fractal dimension of the interfaces and the Tracy-Widom distribution for the fluctuations.

Why is Nature so kind to us? Why does she agree to forget most of the mumbo-jumbo of the small-scale details, so that we can do physics and math at large scales that make any sense at all? Somehow this connects with one of the deepest questions in science, as posed by Eugene Wigner: “why is mathematics so effective in describing Nature?”

# Quantum particle near an event horizon

As the next episode in our series about the Unruh effect (it gets hot when you accelerate), here you can watch a video I have prepared which depicts how a quantum particle behaves near an event horizon.

So, what are we watching? The left and right panels show the spin-down and spin-up wavefunctions for a massless Dirac particle (a massless electron), initially at rest in Rindler spacetime. Colors correspond to phase. Because of the principle of equivalence, there are two alternate physical interpretations:

• You are moving with constant acceleration rightwards. At time t=0 you drop a Dirac particle. It seems to move leftwards, just because you leave it behind. When it gets far away, it slows down. This is due to relativistic time-dilation.
• There is a uniform gravitational field pointing leftwards. That’s why the Dirac particle accelerates in that direction. As it falls, it slows down. This is due to gravitational redshift.

Of course, the interference pattern which develops at the center is just quantum mechanics, nothing else. But when the particle reaches the edges of the box (top, bottom and right), new interference patterns appear which are spurious to our problem. That’s just the handicap of a finite-size simulation.

Nice, eh? This was work we developed at ICFO, Barcelona, along with Maciej Lewenstein, Alessio Celi and Jarek Korbicz. I have just shown it as a premiere during the Quantum gases meeting at CSIC in Madrid.

# It’s hot when I accelerate!

Let us discuss one of the most intriguing predictions of theoretical physics. Picture yourself moving through empty space with fixed acceleration, carrying along a particle detector. Despite the fact that space is empty, your detector will click sometimes. The number of clicks will increase if you accelerate harder, and they will stop completely if you bring your acceleration back to zero. This is called the Unruh effect, and it was predicted in 1976.

That’s weird, isn’t it? Well, we have not even scratched the surface of weirdness!

So, more weirdness. The particles will be detected at random times, and will have random energies. But, if you plot how many particles you get at each energy, you’ll get a thermal plot. I mean: the same plot that you would get from a thermal bath of particles at a given temperature T. And what is that temperature?

$\displaystyle{T = \frac{\hbar a}{2\pi c k_B}}$

That is called the Unruh temperature. So nice! All those universal constants… and an unexpected link between acceleration and temperature. How deep is this? We will try to uncover that.
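A quick back-of-the-napkin evaluation (with the Boltzmann constant $k_B$ included so that $T$ comes out in kelvin) shows just how tiny the effect is at everyday accelerations:

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
k_B = 1.380649e-23       # J / K

def unruh_temperature(a):
    """Unruh temperature in kelvin for a proper acceleration a in m/s^2."""
    return hbar * a / (2 * math.pi * c * k_B)

# Earth's gravity: a ridiculously small temperature
print(unruh_temperature(9.8))          # ~ 4e-20 K
# Acceleration needed to feel a 1 K bath
print(2 * math.pi * c * k_B / hbar)    # ~ 2.5e20 m/s^2
```

This is why nobody has measured it directly yet: you need absurd accelerations to get a measurable temperature.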

In our previous Physics Napkin we discussed the geometry of spacetime felt by an accelerated observer: Rindler geometry. Take a look at that before jumping into this new stuff.

Has this been proved in the laboratory?

No, not at all. In fact, I am working with my ICFO friends on a proposal for a quantum simulation. But that’s another story; I will hold it for the next post.

So, if we have not seen it (yet), how sure are we that it is real? How far-fetched is the theory behind it? Is all this quantum gravity?

Good question! No, we don’t have any good theory of quantum gravity (I’m sorry, string theorists, it’s true). The Unruh effect is a very clear conclusion from theories which have been thoroughly checked: quantum field theory and fixed-background general relativity. By fixed background I mean that the curvature of spacetime does not change.

Detecting particles where there were none… where does the energy come from?

From the force which keeps you accelerated! That’s right: whoever is pushing you would feel a certain drag, because some of the energy is being spent in the creation of particles.

It’s hot when I accelerate!! Ayayay!!!

I see $\hbar$ appeared in the formula for the Unruh temperature. Is it a purely quantum phenomenon?

Yes, although there is a wave-like explanation for (most of) it. Whenever you move with respect to a wave source at constant speed, you see its frequency Doppler-shifted. If you move with acceleration, the frequency changes in time. This change of frequency in time makes you lose track of the phase, so you really observe a mixture of frequencies. If you multiply frequencies by $\hbar$, you get energies, and the result is just a thermal (Bose-Einstein) distribution!

But, really… is it quantum or not?

Yes. What is a particle? What is a vacuum? A vacuum is just the quantum state for matter which has the minimum energy, the ground state. Particles are excitations above it. All observers are equipped with a Hamiltonian, which is just a certain “way to measure energies”. Special relativity implies that all inertial observers must see the same vacuum. If the quantum state has minimal energy for an observer at rest, it will have minimal energy for all of them. But, what happens to non-inertial observers? They are equipped with a Hamiltonian, a way to measure energies, which is full of weird inertial forces and garbage. It’s no big wonder that, when they measure the energy of the vacuum, they find it’s not minimal. And, whenever it’s not minimal, it means that it’s full of particles. Yet… why a thermal distribution?

Is all this related to quantum information?

Short story: yes. As we explained in the previous post, an accelerated observer will always see a horizon appear behind him. Everything behind the horizon is lost to him: it cannot affect him, and he cannot affect it. There is a net loss of information about the system. This loss can be described as randomness, which can be read as thermal.

Long story. In quantum mechanics we distinguish two types of quantum states: pure and mixed. A pure quantum state is maximally determined; the uncertainty in its measurements is completely unavoidable. Now imagine a machine that can generate quantum systems in two possible pure states A and B, choosing which one to generate by tossing a coin which is hidden from you. The quantum system is now said to be in a mixed state: it can be in either of the two pure states, with certain probabilities. The system is correlated with the coin: if you could observe the coin, you would reduce your uncertainty about the quantum state.
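A tiny numerical illustration of the coin story (the two pure states and the coin probability below are made up for the example): build the density matrix $\rho = p\,|A\rangle\langle A| + (1-p)\,|B\rangle\langle B|$ and check that its purity $\mathrm{Tr}\,\rho^2$ drops below 1, the hallmark of a mixed state.

```python
import math

def density_matrix(p):
    """rho = p |A><A| + (1-p) |B><B| for two example single-qubit states."""
    A = [1.0, 0.0]                                  # |A> = |0>
    B = [1.0 / math.sqrt(2), 1.0 / math.sqrt(2)]    # |B> = (|0> + |1>)/sqrt(2)
    return [[p * A[i] * A[j] + (1 - p) * B[i] * B[j]
             for j in range(2)] for i in range(2)]

def purity(rho):
    """Tr rho^2: equals 1 for a pure state, less than 1 for a mixed one."""
    return sum(rho[i][j] * rho[j][i] for i in range(2) for j in range(2))

print(purity(density_matrix(0.0)))   # coin always says B: pure, purity 1
print(purity(density_matrix(0.5)))   # fair coin: mixed, purity 0.75
```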

The true vacuum, as measured by inertial observers, is a pure state. Although it is devoid of particles, it can not be said to be simple in any sense. Instead, it contains lots of correlations between different points of space. Those correlations, being purely quantum, are called entanglement. But, besides that, they are quite similar to the correlations between the quantum state and the coin.

When the horizon appears to the accelerated observer, some of those correlations are lost forever. Simply, because some points are gone forever. Your vacuum, therefore, will be in a mixed state as long as you do not have access to those points, i.e.: while the acceleration continues.

Where do we physicists usually find mixed states? In systems at finite temperature. Each possible pure state gets a probability which depends on the ratio between its energy and the temperature. The thermal bath plays the role of a hidden coin. So, after all, it was not so strange that the vacuum, as measured by the accelerated observer, is seen as a thermal state.
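In formulas, each pure state of energy $E_n$ gets the Boltzmann probability $p_n = e^{-E_n/k_BT}/Z$. A minimal sketch with made-up energy levels:

```python
import math

def thermal_probabilities(energies, kT):
    """Gibbs weights p_n = exp(-E_n / kT) / Z for a list of energy levels."""
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)                 # partition function normalizes the weights
    return [w / Z for w in weights]

levels = [0.0, 1.0, 2.0]             # toy three-level system, energies in units of kT
print(thermal_probabilities(levels, kT=1.0))    # all levels populated
print(thermal_probabilities(levels, kT=0.01))   # cold: essentially only the ground state
```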

Temperature dependence on position

As we explained in the previous post, the accelerations of different points in the reference frame of the (accelerated) observer are different. They increase as you approach the horizon, and become infinite there. That means that it will be hotter near the horizon; infinitely hotter, in fact.

After our explanation regarding the loss of correlations with points behind the horizon, it is not hard to understand why the Unruh effect is stronger near it. Those are the points which are more strongly correlated with the lost points.

But from a thermodynamic point of view, it is very strange to think that different points of space have different temperatures. Shouldn’t they tend to equilibrate?

No. In general relativity we learn that, in curved spacetime, a system can be perfectly at thermal equilibrium with different local temperatures. Consider the space surrounding a heavy planet, and say that the particles near the surface are at a given temperature. Some of them will escape to the outer regions, but they will lose energy in order to do so, so they will arrive colder. Thus, in equilibrium, the local temperature is proportional to the local strength of gravity… again, acceleration. Everything seems to come together nicely.

Hawking predicted that, if you stand at rest near a black hole, you will detect a thermal bath of particles, and it will get hotter as you approach the event horizon. Is that weird or not? To us, not any more. Because in order to remain at rest near a black hole, you need a strong supporting force beneath your feet. You feel a strong acceleration, which is… your weight. The way to feel no acceleration is just to fall freely. And, in that case, you would detect no Hawking radiation at all. So, Hawking radiation is just a particular case of the Unruh effect.

There is a feeling in the theoretical physics community that the Unruh effect is, somehow, more fundamental than it seems. This relation between thermal effects and acceleration sounds so strange, yet everything falls into place so easily, from so many different points of view. It is at the basis of the so-called black hole information paradox, which we will discuss some other day. There have been several attempts to take Unruh quite seriously and derive a new physical theory, typically a quantum gravity theory, out of it. The most famous may be Verlinde’s entropic gravity. But that’s enough for today, isn’t it?

For references, see: Crispino et al., “The Unruh effect and its applications”.

I’ll deliver a talk about our proposal for a quantum simulator of the Unruh effect in Madrid, CSIC, C/ Serrano 123, on Monday 14th, at 12:20. You are all very welcome to come and discuss!

# Feeling acceleration (Rindler spacetime)

This is the first article of a series on the Unruh effect. The final aim is to discuss a new paper on which I am working with the ICFO guys, about a proposal for a quantum simulator to demonstrate how those things work. We are going to discuss some rather tough stuff: Rindler spacetime, quantum field theory in curved spacetime, Hawking radiation, inversion of statistics… and it gets mixed with all the funny stories of cold atoms in optical lattices. I’ll do my best to focus on the conceptual issues, leaving all the technicalities behind.

Our journey starts with special relativity. Remember Minkowski spacetime diagrams? The horizontal axis is space, the vertical one is time. The next figure depicts a particle undergoing constant acceleration rightwards. As time goes to infinity, the velocity approaches c, which is the diagonal line. But also, as time goes to minus infinity, the velocity approaches -c. We’ve arranged things so that, at time t=0, the particle is at x=1.

Minkowski diagram of an accelerated particle.
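The curve in that diagram is the hyperbola $x^2 - (ct)^2 = (c^2/a)^2$, traced out by $x(\tau) = (c^2/a)\cosh(a\tau/c)$, $ct(\tau) = (c^2/a)\sinh(a\tau/c)$ as the proper time $\tau$ runs. A quick numerical check, in units where $c = 1$ and with $a = 1$ so that $x = 1$ at $t = 0$, as in the figure:

```python
import math

a = 1.0   # proper acceleration, in units where c = 1

def trajectory(tau):
    """(x, t) of the uniformly accelerated particle at proper time tau."""
    return math.cosh(a * tau) / a, math.sinh(a * tau) / a

for tau in (-3.0, 0.0, 3.0):
    x, t = trajectory(tau)
    # the particle never leaves the hyperbola x^2 - t^2 = 1/a^2
    assert abs(x * x - t * t - 1.0 / a ** 2) < 1e-9
    # velocity dx/dt = tanh(a*tau): it approaches +-1 (the speed of light)
    print(tau, x, t, math.tanh(a * tau))
```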

Now we are told that the particle is, really, a vehicle carrying our friend Alice inside. Since the real acceleration points rightwards, she feels a leftwards uniform gravity field. Her floor, therefore, is the left wall.

Alice in her vehicle. Acceleration points rightwards, “gravity” points leftwards.

Are you ready for a nice paradox? This one is called Bell’s spaceship paradox. Now imagine that Bob is also travelling with the same acceleration as Alice, but starting a bit behind her. Their trajectories can be seen in the figure below.

Alice and Bob travel with the same acceleration. Their distance, from our point of view, is constant.

From our point of view they travel in parallel, and their distance stays constant in time. So we could have joined them with a rigid rod from the beginning. But wait, something weird happens: as they gain speed, that moving rod should Lorentz-contract for us… This is one of those typical special-relativity paradoxes, which only look paradoxical because we forget that space and time measurements depend on the point of view. The paradox is readily solved when we realize that, from Alice’s point of view, Bob lags behind! So, in order to keep up with her and keep their distance constant, Bob should accelerate harder than she does!

So, let us now shift to Alice’s point of view. Objects at a fixed location at her left move with higher acceleration than she does, and objects at her right move with lower acceleration. Her world must be pretty strange. How does physics look to her?

One of the fascinating things about general relativity is how smoothly it emerges from special relativity when you consider accelerating observers. In order to describe gravity, general relativity uses the concept of curved spacetime. In order to describe how Alice feels the world around her, we can also use the concept of curved spacetime. It’s only logical, Mr Spock, since the principle of equivalence states that you cannot distinguish acceleration from a (local) gravity field.

Fermi and Walker explained how to find the curved spacetime which describes how any accelerated observer feels the space around her, no matter how complicated her trajectory is. The case of Alice is especially simple, but it will serve as an illustration.

The basic idea is that of the tetrad: the set of four vectors which, at each point, define the local reference frame. In German they are called “Vierbein”, four-legs, which sounds nerdier. Look at the next figure. At any moment, Alice’s trajectory is described by a velocity 4-vector v. Any particle, in its own reference frame, has velocity 4-vector (1,0,0,0). Therefore, we define Alice’s time-vector as v. What happens with the space-vectors? They must be rotated so that the speed of light at her point is preserved. So, if the time-vector rotates by a given angle, the space-vector rotates by the same angle in the opposite direction, so that the bisector stays fixed.

The local frames of reference for Alice, at two different times.

Now, each point can be given a different set of “Alice coordinates”, according to the local time and local space from Alice’s point of view. But this change of coordinates is non-linear, and does funny things. The first problem appears when we realize that the space-like lines all cross at a certain point! What can this mean? That it makes no sense to use this system of coordinates beyond that point. That point must be, somehow, special.

In fact, events to the left of the intersection point cannot affect Alice in any way! In order to see why, just consider that, from our point of view, a light ray emitted there will never intersect Alice’s trajectory. Everything to the left of the critical point is lost forever to her. Does this sound familiar? It should: it is similar to the event horizon of a black hole.

Red: what Alice can’t see. Green: where Alice can’t be seen.

Let us assume that you did all the math in order to find out how does spacetime look to Alice. The result is called Rindler spacetime, described by the so-called Rindler metric. In case you see it around, it looks like this

$ds^2=(ax)^2 dt^2 - dx^2 - dy^2 - dz^2$

Don’t worry if you don’t really know what that means. Long story short: when Alice looks at points at her left (remember, gravity points leftwards), she sees a lower speed of light. Is that even possible? It goes against the principle of relativity, doesn’t it? No! The principle of relativity talks about inertial observers, and Alice is not one.

So, again: points at her left have lower speeds of light, and therefore relativistic effects are more pronounced there. Even worse: as you move leftwards, this “local speed of light” decreases more and more… until it reaches zero! Exactly at the “special point” where the Alice coordinates behaved badly. What happens there? It’s a horizon, where time stands still.

The world for Alice, Rindler spacetime: speed of light depends on position, and becomes zero at the horizon.
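You can watch the horizon at work by following a light ray in Alice's coordinates. Setting $ds^2 = 0$ in the Rindler metric gives a local light speed $|dx/dt| = ax$, so a leftward ray obeys $dx/dt = -ax$ and approaches $x = 0$ only exponentially, never reaching it. A minimal sketch with $a = 1$:

```python
import math

a = 1.0   # acceleration parameter of the Rindler metric, units with c = 1

def light_ray(x0, t):
    """Rindler position of a leftward light ray emitted at x0:
    ds^2 = 0 gives dx/dt = -a*x, hence x(t) = x0 * exp(-a*t)."""
    return x0 * math.exp(-a * t)

for t in (0.0, 1.0, 5.0, 20.0):
    # the ray creeps towards the horizon x = 0 but never arrives
    print(t, light_ray(1.0, t))
```

In Alice's time coordinate, the fall to the horizon takes literally forever, which is exactly the "frozen" picture described below.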

Imagine that Alice drops a ball, just by opening her hand. It “falls” leftwards with acceleration. OK, OK, it’s really Alice leaving it behind, but we are describing things from her point of view. Now imagine that Bob is inside the ball, trying to describe his experiences to Alice. Bob feels just normal; from his point of view, he is simply an inertial observer. But Alice sees Bob talking more and more slowly as he approaches the horizon. Then he freezes at that point. Fewer and fewer photons arrive, and they are highly redshifted (they lose energy), because they had to climb up against the gravitational potential. Finally, he becomes too dim to be recognized, and Alice loses sight of him.

That description would apply, exactly, to somebody hovering at a fixed position near a black hole who drops a ball into it. The event horizons are really similar. In both cases the observer is accelerated: you must feel an acceleration in order to stay fixed near a black hole! As Wheeler used to say, the problem of weight is not a problem of gravitation. Gravitation only explains free fall. The problem of weight is a problem of solid state physics!!

For more information, see Misner, Thorne and Wheeler’s Gravitation, chapter 6. It’s a classic. I wish to thank Alessio, Jarek and Silvia for suffering my process of understanding…

# Qubism

Scientists tend to be very visual people. We love to understand through pictures. About one year ago, we had one of those ideas which remind you why it’s so fun to be a theoretical physicist… Simple and deep. The idea was about how to represent quantum many-body wavefunctions in pictures. Speaking very coarsely, the high complexity of the wavefunction maps into fractality of the final image.

So, more slowly. As you know, a bit can take only two values: 0 and 1. A qubit is a quantum bit, which can be in any linear combination of the states $|0\rangle$ and $|1\rangle$, like Schrödinger’s cat. In other terms: a qubit is represented by two complex numbers: $|\Psi\rangle = \alpha |0\rangle + \beta |1\rangle$. If you have two qubits, there are four basis states: 00, 01, 10 and 11, so we get

$|\Psi\rangle = \alpha_{00} |00\rangle + \alpha_{01} |01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle$

If you add one qubit, the number of parameters doubles. For N qubits, you need $2^N$ parameters in order to specify the state completely! The task of representing those values in a picture in a meaningful way seems hopeless… Our idea is to start with a square and divide it into four quadrants, labelled by the values of the first two qubits. Each quadrant is filled with a color associated with the corresponding parameter.

What if we add a second pair of qubits? Then we move to “level 2”: we split each quadrant into four parts, again, and label them according to the values of the new qubits. We can go as deep as we want. The thermodynamic limit $N\to\infty$ corresponds to the continuum limit.
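In code, the mapping might look like the following sketch. Note that the quadrant-labelling convention here (first qubit of each pair picks the row, second the column) is my own choice for illustration; see the paper for the actual one.

```python
def qubism_grid(amplitudes):
    """Map a state of N qubits (N even), given as a list of 2**N amplitudes
    in binary order, onto a 2**(N/2) x 2**(N/2) grid of nested quadrants."""
    N = len(amplitudes).bit_length() - 1
    assert len(amplitudes) == 2 ** N and N % 2 == 0
    side = 2 ** (N // 2)
    grid = [[0.0] * side for _ in range(side)]
    for index, amp in enumerate(amplitudes):
        bits = [(index >> (N - 1 - k)) & 1 for k in range(N)]
        row = col = 0
        for k in range(N // 2):
            row = 2 * row + bits[2 * k]       # first bit of the pair: row
            col = 2 * col + bits[2 * k + 1]   # second bit: column
        grid[row][col] = amp                  # a plot would color this cell
    return grid

# Example: the 4-qubit basis state |0101>
amps = [0.0] * 16
amps[0b0101] = 1.0
grid = qubism_grid(amps)   # its single nonzero entry sits at row 0, column 3
```

Successive bit pairs pick ever smaller quadrants, which is exactly why the complexity of the wavefunction shows up as fractality of the picture.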

The full description of the algorithm is in this paper from arXiv, and we have launched a webpage to publish the source code to generate the qubistic images. So, the rest of this blog entry will be just a collection of pictures with some random comments…

This is the ground state of the Heisenberg Hamiltonian for $N=12$ qubits. It is an antiferromagnetic system, which favours opposite orientations of neighbouring qubits (0-1 or 1-0). The main diagonal structures are linked to what we call a spin liquid.

These four pics correspond to the so-called half-filling Dicke states: systems in which half the qubits are 0 and the other half 1… but you do not know which are which! The four pics show the sequence as you increase the number of qubits: 8, 10, 12 and 14.

This one is the AKLT state for N=10 qutrits (each can be in three states: -1, 0 or 1). It has some nice hidden order, known as the Haldane phase. That order shows itself quite nicely in its self-similarity.

This one is the Ising model in a transverse field undergoing a quantum phase transition… but the careful reader will have realized that it does not fit in a square any more! Indeed, it is plotted using a different technique, mapping into triangles. Cute, eh?

But I have not yet mentioned one of its most amazing properties: the mysterious quantum entanglement can be visualized from the figures. This property of quantum systems is a strong form of correlation, much stronger than any classical system can achieve.

So, if you want to learn more, browse the paper or visit this webpage, although it is still under construction…

With warm acknowledgments to my coauthors: Piotr Migdał and Maciej Lewenstein (ICFO), Miguel I. Berganza and Germán Sierra (IFT), and also to Silvia N. Santalla and Daniel Peralta.

# Rough is beautiful (sometimes)

No posts for three weeks… you know, we’ve been revolting in Spain, and there are times in which one has to care for politics. But physics is a jealous lover… :)

So, we have published a paper on kinetic roughening. What does that mean? OK, imagine that, while your mind roams through the intricacies of a physics problem, the corner of your napkin falls into your coffee cup. You see how the liquid climbs up, and the interface which separates the dry and wet parts of the napkin becomes rough. Other examples: surface growth, biological growth (also tumors), ice growing on your window, a forest fire propagating… Rough interfaces appear in many different contexts.

We have developed a model for those phenomena, and simulated it on a computer. Basically, the interface at any time is a curve. It always grows in the normal direction, and the growth rate is random. The growth is also faster in the concavities and slower in the convex regions. After a while, the interfaces develop a fractal morphology. I will show you a couple of videos: one in which the interface starts out flat, and another in which it starts as a circle. The first looks more like the flames of hell, the second more like a tumor.
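The real model in the paper tracks a full closed curve in the plane; the flavour can be conveyed by a much cruder 1D cousin, a discretized KPZ-type equation $\partial_t h = \nu\,\partial_x^2 h + (\lambda/2)(\partial_x h)^2 + \eta$, whose width grows in time. All parameters below are arbitrary choices for the sketch, not the paper's.

```python
import math
import random

def grow(L=128, steps=1000, dt=0.01, nu=1.0, lam=1.0, noise=1.0, seed=7):
    """Euler integration of a discretized KPZ equation on a ring of L sites.
    Returns the interface width (std deviation of the height profile)."""
    rng = random.Random(seed)
    h = [0.0] * L
    for _ in range(steps):
        new = []
        for i in range(L):
            left, right = h[i - 1], h[(i + 1) % L]   # periodic neighbours
            lap = left + right - 2 * h[i]            # discrete Laplacian
            slope = (right - left) / 2.0             # centered derivative
            eta = rng.gauss(0.0, 1.0)                # uncorrelated noise
            new.append(h[i] + dt * (nu * lap + 0.5 * lam * slope ** 2)
                       + math.sqrt(dt) * noise * eta)
        h = new
    mean = sum(h) / L
    return math.sqrt(sum((x - mean) ** 2 for x in h) / L)

print(grow(steps=50), grow(steps=1000))   # the interface roughens with time
```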

The fractal properties of those interfaces are very interesting… but also a bit hard to explain, so I promise to come back to them in a (near) future.

The work has been done with Silvia Santalla and Rodolfo Cuerno, from Universidad Carlos III de Madrid. Silvia has presented it at FisEs’11, in Barcelona, a couple of hours ago, so I got permission at last to upload the videos… ;) The paper is published in JSTAT and the ArXiv (free to read).

# There’s music in the primes… (part I)

OK, now a new section at physicsnapkins, in which I will discuss a bit of my own research… Recently, Germán Sierra and I submitted to the arXiv a paper about the Riemann hypothesis, which you can see here. To be honest, the real expert in the field is Germán; my contribution is mostly technical. Anyway, I’ll try to convey here the basic ideas of the story… We’ll take a walk around the concepts, assuming only freshman-level maths.

It’s fairly well known that the sum of the reciprocal numbers diverges: $\sum_{n=1}^\infty 1/n\to\infty$. Euler found, using an amazing trick, the sum of the inverse squares and, in fact, the sum of the inverses of any even power. This formula is simply amazing: $\sum 1/n^2 = \pi^2/6$, isn’t it? Now, Riemann defined the “zeta” function for any possible exponent:

$\displaystyle{\zeta(z)=\sum_{n=1}^\infty {1\over n^z}}$
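For real $z > 1$ the series converges, and you can check Euler's result for $\zeta(2)$ directly with a naive partial sum (a sketch, not a serious way to evaluate $\zeta$):

```python
import math

def zeta_partial(z, terms=100000):
    """Naive partial sum of the Dirichlet series for zeta(z), valid for z > 1."""
    return sum(1.0 / n ** z for n in range(1, terms + 1))

print(zeta_partial(2.0))   # -> 1.64492...
print(math.pi ** 2 / 6)    # -> 1.64493...
```

The truncation error decays like 1/terms, so a hundred thousand terms give four or five correct digits.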

So, we know that $\zeta(1)=\infty$, $\zeta(2)=\pi^2/6$, and many other values. Riemann asked: what happens when z is complex? Complex function theory is funny… We know that z=1 is a singularity. If you do a Taylor series around, say, z=2, the radius of convergence is the distance to the nearest singularity, so R=1. But now your function is well defined in a circle of center z=2 and radius R=1. This means that you can expand the function again in a Taylor series from any point within that circle. And, again, the radius of convergence will be the distance to the closest singularity. This procedure is called analytic continuation.

The series of circles show how to compute the analytical continuation of a complex function...

Well… in the case of the Riemann $\zeta$ function, the only singularity is z=1. Therefore, I can do the previous trick and… bypass it! Circle after circle, I can reach z=0 and get a value, which happens to be… -1/2. So, somehow, we can say that, if we had to give a value to the sum 1+1+1+1+…, it should be -1/2. Even more amazing, $\zeta(-1)=\sum_{n=1}^\infty n=1+2+3+4+\cdots=-1/12$. Hey, that’s really perverse maths! Can this be useful in real life, i.e. in physics? Well, it is used in string theory to prove that you need (in the simplest bosonic case) dimension D=26… but that’s not necessarily true physics. It is also needed for the computation of the Casimir effect, which is certainly real. Maybe I’ll devote a post to that someday. Anyway, this is the look of the Riemann zeta function in the complex plane:

The Riemann zeta function. Color hue denotes phase, and intensity denotes modulus. The white point at z=1 is the singularity.

Even more surprises… $1^2+2^2+3^2+\cdots=0$. In fact, it’s easy (ok, ok… it’s easy when you know how!) to prove that $\zeta(-2n)=0$ for all positive n. Those are called the trivial zeroes of the Riemann zeta function (amazing!)… So, what are the non-trivial ones? Riemann found a few zeroes which did not sit at the negative even integers. But all of them had something in common: their real part was 1/2. And here comes the Riemann hypothesis: maybe (maybe) all the non-trivial zeroes of the $\zeta$ function have real part 1/2.

OK, I hear you say. I got it. But I still don’t get the fun about the title of the post, and why so much fuss about it. Here it comes…

Euler himself (all praise be given to him!) found an amazing relation, which I encourage you to prove by yourselves:

$\displaystyle{\zeta(z)=\sum_{n=1}^\infty {1\over n^z} = \prod_{p \hbox{ prime}} {1\over 1-p^{-z}}}$

That is where the real link begins. A hint for the proof: expand the product:

$\displaystyle{\prod_{p \hbox{ prime}} {1\over 1-p^{-z}} = {1\over 1-2^{-z}} {1\over 1-3^{-z}} {1\over 1-5^{-z}} \cdots}$

Wonderful. But $1/(1-x)$ can be easily recognized as the sum of a geometric series, right?
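Numerically, truncating the product over the small primes already lands very close to $\zeta(2)=\pi^2/6$. A minimal check (the sieve helper is my own, not anything from the paper):

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes p < n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(n) if sieve[p]]

def euler_product(z, cutoff=10000):
    """Truncated Euler product over primes p < cutoff."""
    result = 1.0
    for p in primes_below(cutoff):
        result *= 1.0 / (1.0 - p ** (-z))
    return result

print(euler_product(2.0))   # -> 1.6449..., close to pi^2/6
```

Each factor $1/(1-p^{-z})$ is the geometric series $1 + p^{-z} + p^{-2z} + \cdots$, and multiplying them out reproduces every $1/n^z$ exactly once, by unique factorization.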

OK, in a few days, I’ll post the second part, explaining why there’s music in the primes, and how quantum mechanics might save the day…