Drunk Euclid still rules

On random metrics and the KPZ universality

Straight lines and circles, polygons and exact constructions. That is the clear and shiny paradise of Euclid and classical geometry. The paradigm of a perfect connection between mind and reality. No need for awkward limiting processes, just irrefutable syllogisms. A cathedral of mathematics and, at the same time, the most successful physical theory ever devised, validated by experiment millions of times a day. Yet, what happens when Euclid does not go easy on the ouzo? Can we still do such good geometry? Surprisingly, the answer is yes.

When Euclid gets drunk, instead of the classical plane we get a wiggly surface. Suppose that you crumple the sheet of paper on which you have drawn some circles and straight lines. They will now look crumpled to you. But, I hear you say, there is nothing you can say about them any more, since you don’t know how the paper was crumpled!

OK, let us start easier. Let us assume that you have a surface or, for math jedis, a two-dimensional Riemannian manifold. The key concept is that of distance (also called the metric). Let us assume that you put some smart ants on the surface which can measure distances between pairs of points, and that these distances are smooth and blah-blah. The role which circles used to play on the plane now belongs to balls: the sets of points which the ants can reach from a fixed one in a given time. Balls can have all kinds of shapes, as you can see in the figure below.

What about straight lines? They are now called geodesics, which are (ahem, ahem) the curves of shortest length between two points. Balls and geodesics fulfill some nice geometric relations, such as orthogonality (i.e., perpendicularity) with respect to the metric. The next figure shows the set of geodesics emanating from a point, along with the balls surrounding it, for a surface shaped like an eggcrate.

So, what happens if we crumple the Euclidean plane randomly (angrily, if needed)? Then the metric on the manifold becomes random. We will assume that wrinkles at one point of the paper sheet are independent of wrinkles at points far away; in formal terms, we say that the metric has only local correlations. Let us now draw balls around a given point.

Now the balls are rather rough. That is reasonable. Rough, but not crazily rough, just a bit. In fact, they look like balls drawn by a slightly drunk mathematician. Moreover, they have a certain appeal to them. Maybe they are fractal curves? Well, they are. And their fractal dimension is d = 3/2, independently of exactly how you crumple your manifold, as long as you only study balls which are much larger than the correlation length of the wrinkles.
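If you want to grow a drunk ball yourself, here is a minimal sketch in Python (my own toy construction, not the code behind the paper): put an independent random length on every edge of a grid – those are the uncorrelated wrinkles – and let Dijkstra’s algorithm do the ants’ job of measuring distances. This setup is known as first-passage percolation, a standard discrete stand-in for a random metric.

import heapq
import random

def random_ball(n=201, R=60.0, seed=1):
    """Ball of radius R around the center of an n x n grid whose
    edges carry independent random lengths (first-passage
    percolation, a toy model of a randomly crumpled metric)."""
    rng = random.Random(seed)
    weights = {}  # one fixed random length per edge

    def edge_len(p, q):
        key = (min(p, q), max(p, q))
        if key not in weights:
            weights[key] = rng.uniform(0.5, 1.5)
        return weights[key]

    src = (n // 2, n // 2)
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:  # plain Dijkstra from the center
        d, p = heapq.heappop(heap)
        if d > dist.get(p, float("inf")) or d > R:
            continue  # stale heap entry, or already outside the ball
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= q[0] < n and 0 <= q[1] < n:
                nd = d + edge_len(p, q)
                if nd < dist.get(q, float("inf")):
                    dist[q] = nd
                    heapq.heappush(heap, (nd, q))
    # the ball: every site the ants can reach in time R
    return {site for site, d in dist.items() if d <= R}

print(len(random_ball()), "sites inside the ball")

Plot the boundary of the returned set for a few values of R and you will see exactly the kind of rough, slightly drunk circles described above.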

If you define the roughness W of a ball as the width of the thinnest ring that can contain its boundary, you get a pretty nice result: W \approx R^{1/3}, where R is the radius. Again a very nice and simple exponent, which shows that drunk geometry hides many interesting secrets. The geodesics also display fractal behavior, if you are wondering. But we have just scratched the surface of this ethylic paradise. Suffice it to say that we found the pervasive (and mysterious) Tracy-Widom distribution when we measured the fluctuations of the local radius of the balls.
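In KPZ language, both facts are summarized by the standard scaling ansatz for radial growth (textbook material, spelled out here for concreteness; v and \Gamma are non-universal constants):

r(\theta, R) \simeq v R + (\Gamma R)^{1/3} \chi(\theta)

where r(\theta, R) is the local radius of the ball in direction \theta and \chi is a Tracy-Widom random variable (which flavor of Tracy-Widom depends on the geometry). The width W \approx R^{1/3} and the Tracy-Widom fluctuations are two faces of the same coin.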

Earlier this year we published those results, together with S.N. Santalla, T. LaGatta and R. Cuerno. Don’t miss the paper, which has very nice pictures, and an even nicer introduction video, in which I crumple actual paper in front of the camera!

How come we get such nice results, independently of how we crumple the manifold? The magic lies in the notion of universality. Imagine that you have some theory defined by its small-scale behavior. When you look from very far away, some details are forgotten and others become even more relevant. This procedure of looking from further and further away is called renormalization. Typically, only a few details remain relevant when you are really far from the smallest scale. And those few details make up the universality class.

So, balls in random manifolds share a universality class with other phenomena… from physics! (Well, if you know of a clear boundary between physics and math, let me know.) Growth of amorphous crystals from a vapour, forest fires, cell colonies… Basically, growth phenomena in noisy conditions are known to belong to the Kardar-Parisi-Zhang (KPZ) class. They share the fractal dimension of the interfaces and the Tracy-Widom distribution of the fluctuations.
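For the record, the class takes its name from the equation that Kardar, Parisi and Zhang wrote down in 1986 for the height h(x,t) of a growing interface:

\partial_t h = \nu \, \partial_x^2 h + \frac{\lambda}{2} \left( \partial_x h \right)^2 + \eta(x,t)

Here \nu smooths the interface, \lambda makes it grow along its local normal, and \eta is white noise. In one dimension it predicts a roughness exponent 1/2 (hence the fractal dimension 3/2 of the balls) and a growth exponent 1/3 (hence W \approx R^{1/3}).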

Why is Nature so kind to us? Why does she agree to forget most of the mumbo-jumbo of the small-scale details, so that we can do physics and math at large scales at all? Somehow this connects with one of the deepest questions in science, as posed by Eugene Wigner: “why is mathematics so effective in describing Nature?”


Rough is beautiful (sometimes)

No posts for three weeks… you know, we’ve been revolting in Spain, and there are times when one has to care about politics. But physics is a jealous lover… :)

So, we have published a paper on kinetic roughening. What does that mean? OK, imagine that, while your mind roams through the intricacies of a physics problem, the corner of your napkin falls into your coffee cup. You see how the liquid climbs up, and the interface separating the dry and wet parts of the napkin becomes rough. Other examples: surface growth, biological growth (also tumors), ice growing on your window, a forest fire propagating… Rough interfaces appear in many different contexts.

We have developed a model for those phenomena and simulated it on a computer. Basically, the interface is, at every instant, a curve. It always grows in the normal direction, with a random growth rate. Growth is also faster in the concavities and slower in the convex regions. After a while, the interfaces develop a fractal morphology. I will show you a couple of videos: one in which the interface starts out flat, and another in which it starts as a circle. The first looks more like the flames of hell, the second more like a tumor.
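Our model works with arbitrary curves, but if you just want to see KPZ-class roughening on your own screen, the simplest toy is ballistic deposition (a classic model, not the one from our paper). A minimal sketch:

import random

def ballistic_deposition(L=500, steps=200000, seed=2):
    """Drop sticky particles onto random columns of a 1D substrate;
    each particle sticks at the top of its column or to the side of
    a taller neighbor. The height profile roughens like KPZ."""
    rng = random.Random(seed)
    h = [0] * L
    for _ in range(steps):
        i = rng.randrange(L)
        left, right = h[(i - 1) % L], h[(i + 1) % L]  # periodic boundaries
        h[i] = max(h[i] + 1, left, right)  # the sticking rule
    return h

h = ballistic_deposition()
mean = sum(h) / len(h)
width = (sum((x - mean) ** 2 for x in h) / len(h)) ** 0.5
print("mean height %.1f, interface width %.2f" % (mean, width))

Track the width as a function of time and you should recover the KPZ growth exponent 1/3.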

The fractal properties of those interfaces are very interesting… but also a bit hard to explain, so I promise to come back to them in a (near) future.

The work was done with Silvia Santalla and Rodolfo Cuerno, from Universidad Carlos III de Madrid. Silvia presented it at FisEs’11, in Barcelona, a couple of hours ago, so I finally got permission to upload the videos… ;) The paper is published in JSTAT and on the arXiv (free to read).

But, really, what is entropy?

Entropy (1): the measure of disorder. The increase in entropy is the decrease in information.

Entropy (2): the measure of the energy which is available for work.

Problem: Reconcile both definitions.

Some people tell me that there is no problem here… Yet I have the feeling that we call many different things entropy because we have the intuition that, in the end, they’re all the same. My main problem: entropy (1) is an epistemological magnitude, whilst entropy (2) is ontological. Confusion between these two planes has given rise to all sorts of problems.

I should explain better: entropy (1) refers to my knowledge of the world, and entropy (2) to its substance. Yet, we might be able to reconcile them. With care, of course. Let us give an example.

Imagine a box with particles bouncing inside. We have no information at all: all possible states are equally likely. With no information, there is no work we can extract from the particles in the box. But imagine that we’re given some information, such as the temperature. Then we can extract some work, if we’re clever. Even more: imagine that we’re given the exact positions and velocities of all the particles at a given moment. Then, again if we’re clever, we can extract a lot of work from the system! The more information we have, the more work we can extract.
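This intuition can be made quantitative; the textbook example is Szilard’s engine. If you know which half of the box a single particle is in – one bit of information – you can insert a piston and let the particle push it isothermally, extracting at most

W_{\max} = kT \ln 2

of work. One bit, kT \ln 2 joules: information has a precise exchange rate into work.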

So that was a purely operational view of entropy. The information content –epistemological, entropy (1)– determines the amount of work we can get –entropy (2). But the ontological view fades away… The system has no intrinsic entropy. The amount of work which is available… available to whom?

Now a problem arises… the second law of thermodynamics, the most sacred of our laws of physics, states that the entropy of an isolated system tends to grow. “But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation”, as Arthur Eddington put it.

Can the second law adapt itself to this view? Yes, it can, but the result is funny: for all observers, no matter their knowledge and their abilities, as time goes by, their information about an isolated physical system tends to decrease, and with it the amount of work they can extract from it.

Of course, isolated is key here. You’re supposed to make no further measurements at all! Then evolution tends to decrease your information, amplifying whatever initial uncertainties you had. Is this statement non-trivial? I think it is, in the following sense: it excludes the possibility of certain dynamical systems being physically realized.

Still, the operational point of view does not fully satisfy me. It states that, no matter how clever you are, the amount of work you can extract from an isolated system decreases with time, since your information does. This maximization over the cleverness of observers is disturbing… What do you think?

The temperature of a single configuration

One of the first things that we learn in thermodynamics is that temperature is a property of an ensemble, not of a single configuration. But is that true? Can we make sense of the idea of the temperature of a single configuration?

I became sure that a meaning could be given to that phrase when I read Kenneth Wilson’s article about the renormalization group in Scientific American, long ago. There he gave three pictures describing the state of a ferromagnet at low, critical and high temperature. He gave just a single picture for each state!! No probability distributions, no notion of ensembles. Just pictures, which looked like these:

Was Wilson wrong? No, he wasn’t! Black spins are down, and green ones are up. He wanted to show that, at low temperature (left picture), you have large domains. At high temperature (right picture), everything is random. And at the critical temperature, the situation becomes interesting: you have patches within patches, of all sizes… But that is another story; I may tell it some other day.

So, you see: Wilson’s pictures make the point. A single configuration can indeed give a feeling for the temperature at which it was taken.
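If you want to produce pictures like Wilson’s yourself, here is a minimal sketch (standard Metropolis dynamics for the 2D Ising model; the sizes, sweep counts and temperatures are my arbitrary choices):

import math
import random

def ising_snapshot(L=64, T=2.27, sweeps=200, seed=3):
    """One 2D Ising configuration relaxed at temperature T with
    Metropolis dynamics (units J = k = 1, so T_c is about 2.27)."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return s

# one snapshot per regime: large domains, patches of all sizes, noise
for T in (1.5, 2.27, 4.0):
    snap = ising_snapshot(T=T)
    m = sum(map(sum, snap)) / 64 ** 2
    print("T = %.2f, magnetization per spin = %+.2f" % (T, m))

Render the three lattices as images and you get the low, critical and high temperature pictures described above.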

In statistical mechanics, each possible configuration C for a system has a certain probability, given by the Boltzmann factor:

p(C) \propto \exp(-E(C)/kT)

where E(C) is the energy of the configuration, T is the temperature and k is Boltzmann’s constant. The proportionality is a technical thing: probabilities have to be normalized. In terms of conditional probability, we can say that, given a temperature, we have a probability:

p(C|T) \propto \exp(-E(C)/kT)

which means: given that the temperature is T, the probability is such and such. Our question is, therefore, what is the probability for each temperature, given the configuration?

p(T|C)

Now, remember Bayes’ theorem? It says that you can reverse conditional probabilities:

p(A|B) p(B) = p(B|A) p(A)

So, we can say:

p(T|C) = p(C|T) p(T)/p(C)

Great, but… what does that mean? We need the a priori probability distributions for the temperatures and for the configurations. That is a nice technical problem, which I leave aside for now. But see my main point: given the temperature, you have a probability distribution for the configurations and, given the configuration, you have a probability distribution for the temperatures.
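For completeness, here is what the proportionality signs hide (standard stuff, just spelled out): the normalization of p(C|T) is the partition function,

p(C|T) = \frac{\exp(-E(C)/kT)}{Z(T)}, \qquad Z(T) = \sum_{C'} \exp(-E(C')/kT)

and it matters here, because Z(T) depends on T: when we compare different temperatures for the same configuration, the normalization does not cancel out.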

Of course, that distribution might be quite broad… Imagine that you have a certain system at a certain unknown temperature T. You get one configuration C_1 and, from there, try to estimate the temperature. You will get a certain probability distribution p(T|C_1), presumably broad. OK, now get more configurations and iterate: p(T|C_1, C_2, C_3, \cdots). As you get more and more, your distribution should narrow down, and you should finally get a delta peak at the right temperature! So, you get a sort of visual thermometer…
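Here is a minimal numerical sketch of that visual thermometer. To keep Z(T) exactly computable I use a toy system of N independent two-level units with energies 0 and 1 (my choice for illustration, in units k = 1), so p(C|T) = e^{-E(C)/T}/(1 + e^{-1/T})^N, and I scan a grid of candidate temperatures with a flat prior:

import math
import random

def log_likelihood(E, N, T):
    # log p(C|T) for N independent two-level units, E of them excited
    return -E / T - N * math.log(1.0 + math.exp(-1.0 / T))

def sample_energy(N, T, rng):
    # draw one configuration at temperature T, return its energy
    p_up = math.exp(-1.0 / T) / (1.0 + math.exp(-1.0 / T))
    return sum(1 for _ in range(N) if rng.random() < p_up)

rng = random.Random(4)
N, T_true = 100, 1.7
grid = [0.5 + 0.05 * i for i in range(80)]  # candidate temperatures
logpost = [0.0] * len(grid)  # flat prior p(T)
for _ in range(50):  # accumulate configurations C_1, C_2, ...
    E = sample_energy(N, T_true, rng)
    logpost = [lp + log_likelihood(E, N, T) for lp, T in zip(logpost, grid)]
best = grid[max(range(len(grid)), key=lambda i: logpost[i])]
print("true T = %.2f, posterior peak = %.2f" % (T_true, best))

As promised, the posterior sharpens around the true temperature as the configurations accumulate.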

The idea is in a very alpha release… so comments are very welcome and, if you get something nice and publish it, please don’t forget where you got the idea! :)

(Note: I already made an entry about this here, but this one is explained better.)