It’s hot when I accelerate!

Unruh effect and Hawking radiation

Let us discuss one of the most intriguing predictions of theoretical physics. Picture yourself moving through empty space with constant acceleration, carrying along a particle detector. Even though space is empty, your detector will click from time to time. The clicks become more frequent if you accelerate harder, and stop completely if you bring your acceleration to zero. This is called the Unruh effect, and it was predicted in 1976.

That’s weird, isn’t it? Well, we have not even scratched the surface of weirdness!

So, more weirdness. The particles are detected at random times and with random energies. But if you plot how many particles you get at each energy, you get a thermal plot. I mean: the very same plot you would get from a thermal bath of particles at some temperature T. And what is that temperature?

T = \frac{\hbar a}{2\pi c k_B}

That is called the Unruh temperature. So nice! All those universal constants… and an unexpected link between acceleration and temperature. How deep does this go? We will try to find out.
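To get a feeling for the numbers, here is a quick sketch in Python (constants are rounded CODATA values; the function name is my own choice):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2 * pi * c * k_B), in kelvin."""
    return hbar * a / (2 * math.pi * c * k_B)

# An Earth-like acceleration gives an absurdly tiny temperature (~4e-20 K):
T_g = unruh_temperature(9.81)

# Conversely, the acceleration whose Unruh temperature is a mere 1 K:
a_1K = 2 * math.pi * c * k_B / hbar   # ~2.5e20 m/s^2
```

This is why the effect has never been seen directly: ordinary accelerations produce temperatures twenty orders of magnitude below anything measurable.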

In our previous Physics Napkin we discussed the geometry of spacetime felt by an accelerated observer: Rindler geometry. Take a look at that before jumping into this new stuff.

Has this been proved in the laboratory?

No, not at all. In fact, I am working with my ICFO friends on a proposal for a quantum simulation. But that’s another story; I will save it for the next post.

So, if we have not seen it (yet), how sure are we that it is real? How far-fetched is the theory behind it? Is all this quantum gravity?

Good question! No, this is not quantum gravity; we don’t have any good theory of quantum gravity (I’m sorry, string theorists, it’s true). The Unruh effect is a clear-cut consequence of two theories which have been thoroughly checked: quantum field theory and general relativity on a fixed background. By fixed background I mean that the curvature of spacetime does not change.

Detecting particles where there were none… where does the energy come from?

From the force that keeps you accelerating! That’s right: whoever is pushing you feels a certain drag, because some of the energy is spent in the creation of particles.

It’s hot when I accelerate!! Ayayay!!!

I see \hbar appeared in the formula for the Unruh temperature. Is it a purely quantum phenomenon?

Yes, although there is a wave-like explanation for (most of) it. Whenever you move with constant speed relative to a wave source, you see its frequency Doppler-shifted. If you move with acceleration, the frequency changes in time. This drift in frequency makes you lose track of the phase, so you really observe a mixture of frequencies. If you multiply frequencies by \hbar, you get energies, and the result is precisely a thermal (Bose-Einstein) distribution!
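For the record, the distribution one finds (in the standard textbook case of a massless field) is the Bose-Einstein one, with the Unruh temperature in its usual place:

```latex
n(\omega) = \frac{1}{e^{2\pi c \omega / a} - 1}
          = \frac{1}{e^{\hbar\omega / k_B T} - 1},
\qquad
T = \frac{\hbar a}{2\pi c k_B}
```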

But, really… is it quantum or not?

Yes. What is a particle? What is a vacuum? The vacuum is just the quantum state of matter with minimum energy, the ground state. Particles are excitations above it. Every observer is equipped with a Hamiltonian, which is just a certain “way to measure energies”. Special relativity implies that all inertial observers must see the same vacuum: if the quantum state has minimal energy for an observer at rest, it has minimal energy for all of them. But what happens to non-inertial observers? They are equipped with a Hamiltonian, a way to measure energies, which is full of weird inertial forces and similar garbage. It is no big surprise that, when they measure the energy of the vacuum, they find it is not minimal. And whenever the energy is not minimal, the state is full of particles. Yet… why a thermal distribution?

Is all this related to quantum information?

Short story: yes. As we explained in the previous post, an accelerated observer always sees a horizon appear behind him. Everything behind the horizon is lost to him: it cannot affect him, and he cannot affect it. There is a net loss of information about the system, and this loss can be described as randomness, which can be read as thermal.

Long story. In quantum mechanics we distinguish two types of quantum states: pure and mixed. A pure quantum state is maximally determined: the uncertainty in its measurements is completely unavoidable. Now imagine a machine that can generate quantum systems in two possible pure states, A and B, choosing which one to generate by tossing a coin which is hidden from you. The quantum system is then said to be in a mixed state: it can be in either of the two pure states, with certain probabilities. The system is correlated with the coin: if you could observe the coin, you would reduce your uncertainty about the quantum state.
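The coin-tossing machine has a standard mathematical description: the density matrix. A minimal numpy sketch (my choice of A and B and a fair coin are just illustrations): the purity Tr ρ² equals 1 for any pure state and drops below 1 for the mixture.

```python
import numpy as np

# Two pure states the hidden machine can produce (illustrative choices):
A = np.array([1, 0], dtype=complex)               # |0>
B = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>

def density(psi):
    """Density matrix |psi><psi| of a pure state."""
    return np.outer(psi, psi.conj())

# Fair hidden coin: the state you must assign is the 50/50 mixture.
rho_mixed = 0.5 * density(A) + 0.5 * density(B)

purity_pure = np.trace(density(A) @ density(A)).real   # exactly 1 for a pure state
purity_mixed = np.trace(rho_mixed @ rho_mixed).real    # strictly below 1
```

Observing the coin would collapse your description back to a pure state and restore the purity to 1.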

The true vacuum, as measured by inertial observers, is a pure state. Although it is devoid of particles, it cannot be said to be simple in any sense. Instead, it contains lots of correlations between different points of space. Those correlations, being purely quantum, are called entanglement; but, beyond that, they are quite similar to the correlations between the quantum state and the coin.

When the horizon appears to the accelerated observer, some of those correlations are lost, simply because some points are gone for good. Your vacuum, therefore, will be in a mixed state as long as you do not have access to those points, i.e. while the acceleration continues.

Where do we physicists usually find mixed states? In systems at finite temperature. Each possible pure state gets a probability which depends on the ratio between its energy and the temperature, and the thermal bath plays the role of the hidden coin. So, after all, it is not so strange that the vacuum, as measured by the accelerated observer, is seen as a thermal state.
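In formulas: a thermal state assigns each pure state of energy E_i the Boltzmann probability p_i ∝ e^{-E_i/k_B T}. A toy sketch (the three-level spectrum is made up, in arbitrary units):

```python
import numpy as np

def gibbs_probabilities(energies, kT):
    """Boltzmann weights p_i = exp(-E_i / kT) / Z of a thermal mixed state."""
    w = np.exp(-np.asarray(energies, dtype=float) / kT)
    return w / w.sum()   # dividing by the partition function Z normalizes them

# Toy spectrum: the lower the energy, the likelier the state.
p = gibbs_probabilities([0.0, 1.0, 2.0], kT=1.0)
```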

Temperature depends on position

As we explained in the previous post, different points in the reference frame of the (accelerated) observer undergo different accelerations. They increase as you approach the horizon, and become infinite there. That means that it will be hotter near the horizon; infinitely hotter right at it, in fact.
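Quantitatively: in Rindler coordinates, a point at proper distance d from the horizon has proper acceleration a(d) = c²/d, so plugging that into the Unruh formula gives a local temperature that diverges at the horizon:

```latex
T(d) = \frac{\hbar\, a(d)}{2\pi c\, k_B} = \frac{\hbar c}{2\pi k_B\, d}
```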

After our explanation of the loss of correlations with the points behind the horizon, it is not hard to understand why the Unruh effect is stronger near it: those are the points most strongly correlated with the lost ones.

But from a thermodynamic point of view, it is very strange to think that different points of space have different temperatures. Shouldn’t they tend to equilibrate?

No. In general relativity we learn that, in curved spacetime, a system can be in perfect thermal equilibrium with different local temperatures. Consider the space surrounding a heavy planet, and say that the particles near the surface are at a given temperature. Some of them will escape to the outer regions, but they must lose energy to do so, so they arrive colder. Thus, in equilibrium, the local temperature is proportional to the local strength of gravity… again, acceleration. Everything seems to come together nicely.

And Hawking radiation?

Hawking predicted that, if you stand at rest near a black hole, you will detect a thermal bath of particles, which gets hotter as you approach the event horizon. Is that weird? To us, not any more. In order to remain at rest near a black hole, you need a strong supporting force under your feet. You feel a strong acceleration, which is… your weight. The only way to feel no acceleration is to fall freely, and in that case you would detect no Hawking radiation at all. So Hawking radiation is just a particular case of the Unruh effect.
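For the record, the temperature Hawking found for an observer far away from a (Schwarzschild) black hole of mass M, once the near-horizon Unruh temperature is redshifted out to infinity:

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}
```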

There is a feeling in the theoretical physics community that the Unruh effect is, somehow, more fundamental than it seems. This relation between thermal effects and acceleration sounds so strange, yet everything falls into place so easily, from so many different points of view. It lies at the basis of the so-called black hole information paradox, which we will discuss some other day. There have been several attempts to take Unruh quite seriously and derive a new physical theory out of it, typically a theory of quantum gravity; the most famous may be Verlinde’s entropic gravity. But that’s enough for today, isn’t it?

For references, see: Crispino et al., “The Unruh effect and its applications”.

I’ll deliver a talk about our proposal for a quantum simulator of the Unruh effect in Madrid, CSIC, C/ Serrano 123, on Monday 14th, at 12:20. You are all very welcome to come and discuss!


Scientists tend to be very visual people: we love to understand through pictures. About one year ago, we had one of those ideas which remind you why it’s so fun to be a theoretical physicist… simple and deep. The idea was a way to represent quantum many-body wavefunctions as pictures. Speaking very coarsely, the high complexity of the wavefunction maps into the fractality of the final image.

So, more slowly. As you know, a bit can take only two values: 0 and 1. A qubit is a quantum bit: it can be in any linear combination of the states |0\rangle and |1\rangle, like Schrödinger’s cat. In other terms, a qubit is represented by two complex numbers: |\Psi\rangle = \alpha |0\rangle + \beta |1\rangle. If you have two qubits, there are four basic states: 00, 01, 10 and 11, so we get

|\Psi\rangle = \alpha_{00} |00\rangle + \alpha_{01} |01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle

If you add one qubit, the number of parameters doubles: for N qubits, you need 2^N parameters to specify the state completely! The task of representing those values in a picture in a meaningful way seems hopeless… Our idea is to start with a square and divide it into four quadrants. Each quadrant is filled with a color associated with the corresponding parameter.
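The exponential growth is easy to see by building the state vector explicitly (a numpy sketch; the particular states are arbitrary illustrations): each extra qubit doubles the length of the amplitude vector via a tensor (Kronecker) product.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                  # |0>
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0>+|1>)/sqrt(2)

# Start with one qubit and keep appending qubits:
psi = ket0
sizes = [psi.size]
for _ in range(5):
    psi = np.kron(psi, ket_plus)   # adding one qubit doubles the vector
    sizes.append(psi.size)
# sizes grows as 2**N: 2, 4, 8, 16, 32, 64
```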

What if we add a second pair of qubits? Then we move to “level 2”: we split each quadrant into four parts, again, and label them according to the values of the new qubits. We can go as deep as we want. The thermodynamic limit N\to\infty corresponds to the continuum limit.
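A minimal sketch of the recipe above, in Python. This is my own reading of the recursion, not the published code: each successive pair of qubits picks a quadrant at the next level, which amounts to sending the odd-position bits of each basis state to the row index and the even-position bits to the column index (one possible convention).

```python
import numpy as np

def qubism(amplitudes):
    """Map a 2**N amplitude vector (N even) onto a 2**(N/2) x 2**(N/2) image.

    Each successive pair of qubits selects a quadrant at the next level of
    recursion: the first bit of the pair feeds the row, the second the column.
    Cell values are the probabilities |amplitude|**2.
    """
    n = int(np.log2(len(amplitudes)))
    assert 2 ** n == len(amplitudes) and n % 2 == 0
    side = 2 ** (n // 2)
    image = np.zeros((side, side))
    for idx, amp in enumerate(amplitudes):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]  # b0 b1 ... b_{n-1}
        row = sum(b << (n // 2 - 1 - j) for j, b in enumerate(bits[0::2]))
        col = sum(b << (n // 2 - 1 - j) for j, b in enumerate(bits[1::2]))
        image[row, col] = abs(amp) ** 2
    return image

# Example: the 2-qubit basis state |01> lights up a single cell.
img = qubism(np.array([0, 1, 0, 0], dtype=complex))
```

Plotting the phases of the amplitudes as colors, instead of probabilities, is a straightforward variation.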

The full description of the algorithm is in this paper from arXiv, and we have launched a webpage to publish the source code to generate the qubistic images. So, the rest of this blog entry will be just a collection of pictures with some random comments…

Qubistic view of the ground state of the Heisenberg Hamiltonian

This is the ground state of the Heisenberg Hamiltonian for N=12 qubits. It is an antiferromagnetic system, which favours opposite orientations for neighbouring qubits (0-1 or 1-0). The main diagonal structures are linked to what we call a spin liquid.

These four pics correspond to the so-called half-filling Dicke states: systems in which half the qubits are 0 and the other half 1… but you do not know which are which! The four pics show the sequence as you increase the number of qubits: 8, 10, 12 and 14.

This one is the AKLT state for N=10 qutrits (each can be in three states: -1, 0 or 1). It has a nice hidden order, known as the Haldane phase, which shows itself quite nicely in the self-similarity of the picture.

This one is the Ising model in a transverse field undergoing a quantum phase transition… but the careful reader will have noticed that it no longer fits in a square! Indeed, it is plotted using a different technique, a mapping onto triangles. Cute, eh?

But I have not yet mentioned the most amazing property: the mysterious quantum entanglement can be visualized from the figures. This property of quantum systems is a strong form of correlation, much stronger than anything a classical system can achieve.

So, if you want to learn more, browse the paper or visit this webpage, although it is still under construction…

With warm acknowledgments to my coauthors: Piotr Migdał, Maciej Lewenstein (ICFO), Miguel I. Berganza and Germán Sierra (IFT), and also to Silvia N. Santalla and Daniel Peralta.

But, really, what is entropy?

Entropy (1): the measure of disorder. The increase in entropy is the decrease in information.

Entropy (2): the measure of the energy which is unavailable for work.

Problem: Reconcile both definitions.

Some people tell me that there is no problem here… Yet I have the feeling that we call many different things “entropy” because we have the intuition that, in the end, they are all the same. My main problem: entropy (1) is an epistemological magnitude, whilst entropy (2) is ontological. Confusion between these two planes has given rise to all sorts of problems.

I should explain better: entropy (1) refers to my knowledge of the world, and entropy (2) to its substance. Yet, we might be able to reconcile them. With care, of course. Let us give an example.

Imagine a box with particles bouncing inside. We have no information at all. All possible states are equally likely. With no information, there is no work we can extract from the particles in the box. But imagine that we’re given some information, such as the temperature. Then we can extract some work, if we’re clever. Now, even more: imagine that we’re given the exact position and velocity of all the particles at a given moment. Then, again if we’re clever, we can extract a lot of work from the system! The more information we have, the more work we can extract.
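This information-to-work link can be made quantitative; Szilard’s engine (which I state here without derivation) says that each bit of information about the system lets you extract at most W = k_B T ln 2 of work. A hedged sketch, with a function name of my own:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def extractable_work(bits_of_information, temperature):
    """Szilard bound: W_max = (number of bits) * k_B * T * ln(2), in joules."""
    return bits_of_information * k_B * temperature * math.log(2)

# One bit of information at room temperature is worth ~3e-21 J of work;
# knowing positions and velocities of many particles is worth proportionally more.
W_one_bit = extractable_work(1, 300.0)
```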

So that was a purely operational view of entropy. The information content (epistemological, entropy (1)) determines the amount of work we can get, and thereby entropy (2). But the ontological view fades away… The system has no intrinsic entropy. The amount of work which is available… available to whom?

Now a problem comes… the second law of thermodynamics, the most sacred of our laws of physics, states that the entropy of an isolated system tends to grow. “But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation”, as Arthur Eddington posed it.

Can the second law adapt itself to this view? Yes, it can, but the result is funny: for every observer, no matter their knowledge and abilities, as time goes by, their information about an isolated physical system tends to decrease, and with it the amount of work they can extract from it.

Of course, isolated is the key word here: you are supposed to perform no further measurements! Then evolution tends to decrease your information, amplifying whatever initial uncertainties you had. Is this statement non-trivial? I think it is, in the following sense: it excludes the possibility of some dynamical systems being physically realized.

Still, the operational point of view does not fully satisfy me. It states that, no matter how clever you are, the amount of work you can extract from an isolated system decreases with time, because your information does. This maximization over the cleverness of all observers is disturbing… What do you think?