Drunk Euclid still rules

On random metrics and the KPZ universality

Straight lines and circles, polygons and exact constructions. That is the clear and shiny paradise of Euclid and classical geometry. The paradigm of a perfect connection between the mind and reality. No need for awkward limiting processes, just irrefutable syllogisms. A cathedral of mathematics and, at the same time, the most successful physical theory ever designed, since it is validated by experiments millions of times a day. Yet, what happens when Euclid does not go easy on the ouzo? Can we still do such good geometry? Surprisingly, the answer is yes.

When Euclid gets drunk, instead of the classical plane we get a wiggly surface. Suppose you crumple a sheet of paper on which you have drawn some circles and straight lines. They will now look crumpled to you. But, I hear you say, there is nothing you can say about them any more, since you don't know how the paper was crumpled!

OK, let us start easier. Let us assume that you have a surface or, for math jedis, a two-dimensional Riemannian manifold. The key concept is that of distance (also called the metric). Let us assume that you put some smart ants on the surface which can measure distances between pairs of points, and that these distances are smooth and blah-blah. The role which circles used to play on the plane now belongs to balls: the set of points which the ants can reach from a fixed one in a given time. Balls can have all kinds of shapes, as you can see in the figure below.
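If you want to play the ants' game on a computer, here is a minimal sketch in Python. It assumes we discretize the surface as a grid whose edge lengths are scaled by a position-dependent "conformal factor"; the names `metric_ball` and `conformal` are mine, just for illustration:

```python
import heapq
import math

def metric_ball(conformal, n, center, radius):
    """Dijkstra on an n x n grid whose edge lengths are scaled by a
    position-dependent conformal factor. Returns the ball of the given
    radius around `center` (the ants' reachable set)."""
    dist = {center: 0.0}
    heap = [(0.0, center)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist.get((i, j), math.inf) or d > radius:
            continue  # stale heap entry, or already beyond the ball
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                # edge length = average conformal factor at the endpoints
                nd = d + 0.5 * (conformal(i, j) + conformal(ni, nj))
                if nd < dist.get((ni, nj), math.inf):
                    dist[(ni, nj)] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return {p for p, d in dist.items() if d <= radius}

# flat-metric sanity check (the 4-neighbor grid makes this ball a
# diamond rather than a circle; a finer discretization would fix that)
ball = metric_ball(lambda i, j: 1.0, 101, (50, 50), 20.0)
print(len(ball), "grid points within distance 20")
```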

What about straight lines? They are now called geodesics, which are (ahem-ahem) the curves of shortest length between two points. Balls and geodesics fulfill some nice geometric relations, such as orthogonality (i.e., perpendicularity) according to the metric. The next figure shows the set of geodesics emanating from a point, along with the balls surrounding it, for a surface with the shape of an eggcrate.
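Geodesics come out of the same machinery: run Dijkstra, keep parent pointers, and walk them back from the destination. A sketch under the same grid assumptions as above (the oscillating factor is only my stand-in for the eggcrate surface):

```python
import heapq
import math

def geodesic(conformal, n, src, dst):
    """Shortest path (a discrete geodesic) between two grid points, with
    edge lengths scaled by a position-dependent conformal factor."""
    dist = {src: 0.0}
    parent = {src: None}
    heap = [(0.0, src)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == dst:
            break
        if d > dist.get((i, j), math.inf):
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                nd = d + 0.5 * (conformal(i, j) + conformal(ni, nj))
                if nd < dist.get((ni, nj), math.inf):
                    dist[(ni, nj)] = nd
                    parent[(ni, nj)] = (i, j)
                    heapq.heappush(heap, (nd, (ni, nj)))
    path, p = [], dst
    while p is not None:          # walk the parent pointers back to src
        path.append(p)
        p = parent[p]
    return path[::-1]

# eggcrate-like conformal factor: the local scale oscillates in space
egg = lambda i, j: 1.5 + math.sin(i / 5.0) * math.sin(j / 5.0)
print(geodesic(egg, 60, (5, 5), (50, 40))[:5], "...")
```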

So, what happens if we crumple the Euclidean plane randomly (angrily, if needed)? Then the metric on the manifold becomes random. We will assume that wrinkles at one point of the paper sheet are independent of wrinkles at points far away; in formal terms, we say that the metric only has local correlations. Let us now draw balls around a given point.
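Before looking at the result, here is one simple way to cook up such a locally correlated random metric (my own illustrative construction, not necessarily the paper's exact one): smooth white noise over a short correlation length and exponentiate it, so the local scale factor stays positive.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_conformal_field(n, corr_length, strength, seed=0):
    """Random 'crumpling': white noise smoothed over corr_length pixels,
    so metric fluctuations are only locally correlated, then
    exponentiated so the conformal factor stays positive."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    smooth = gaussian_filter(noise, sigma=corr_length)
    smooth *= strength / smooth.std()   # set the fluctuation amplitude
    return np.exp(smooth)               # conformal factor e^{phi(x)} > 0

field = random_conformal_field(n=200, corr_length=3, strength=0.5)
print(field.min(), field.max())   # positive, with O(1) fluctuations
```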

Now the balls are rather rough. That is reasonable. Rough, but not crazily rough, just a bit. In fact, they look like balls drawn by a slightly drunk mathematician. Moreover, they have a certain appeal to them. Maybe they are fractal curves? Well, they are. And their fractal dimension is d = 3/2, independently of how exactly you crumple your manifold, as long as you only study balls which are much larger than the distance above which the wrinkles are correlated.
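If you want to check a fractal dimension like that yourself, the standard trick is box counting: cover the curve with boxes of side s, count how many boxes it touches, and fit the slope of log N against log(1/s). A generic sketch (my own toy code, sanity-checked here on a smooth circle, which should give dimension 1; run it on the boundary points of a ball in a random metric and you should see something close to 3/2):

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a set of 2D points by counting
    how many boxes of side s it touches, then fitting log N vs log(1/s)."""
    pts = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in pts}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity check on a smooth curve: a circle should give dimension ~1
theta = np.linspace(0, 2 * np.pi, 20000)
circle = np.column_stack((1000 * np.cos(theta), 1000 * np.sin(theta)))
print(box_counting_dimension(circle, sizes=[2, 4, 8, 16, 32]))
```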

If you define the roughness W of a ball of radius R as the width of the ring which can contain its boundary, then you get a pretty nice result: W \approx R^{1/3}. Again a very nice and simple exponent, which shows that drunk geometry hides many interesting secrets. The geodesics also show fractal behavior, if you are wondering. But we have just scratched the surface of this ethylic paradise. Suffice it to say that we found the pervasive (and mysterious) Tracy-Widom distribution when we measured the fluctuations of the local radius of the balls.
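Here is a rough way to see the R^{1/3} at work, using a standard lattice toy version of a random metric called first-passage percolation (i.i.d. random weights; my stand-in, not the continuum manifolds of the paper). The fluctuation of the passage time to a point at distance R, measured over many disorder realizations, plays the role of the fluctuation of the ball's local radius:

```python
import heapq
import numpy as np

def passage_time(n, target_dx, seed):
    """First-passage time from the center of an n x n grid to the site
    target_dx columns to the right, with i.i.d. random site weights
    (first-passage percolation, a standard toy random metric)."""
    rng = np.random.default_rng(seed)
    site = rng.uniform(0.5, 1.5, size=(n, n))   # random local 'cost'
    c = n // 2
    target = (c, c + target_dx)
    dist = np.full((n, n), np.inf)
    dist[c, c] = 0.0
    heap = [(0.0, c, c)]
    while heap:
        d, i, j = heapq.heappop(heap)
        if (i, j) == target:
            return d
        if d > dist[i, j]:
            continue  # stale heap entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < n:
                nd = d + 0.5 * (site[i, j] + site[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))

# fluctuations of the local radius: std of the passage time over
# disorder realizations, for growing distances R
for R in (16, 32, 64):
    times = [passage_time(2 * R + 21, R, seed) for seed in range(25)]
    print(R, np.std(times))   # KPZ predicts growth ~ R^{1/3}
```

Fit log of the std against log R and the slope should drift towards 1/3 as R grows (slowly; these exponents are famous for stubborn finite-size corrections).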

Earlier this year, together with S.N. Santalla, T. LaGatta and R. Cuerno, we published those results. Don't miss the paper, which has very nice pictures, and an even nicer introduction video, in which I crumple actual paper in front of the camera!

How come we get such nice results, independently of how we crumple the manifold? The magic lies in the notion of universality. Imagine that you have some theory defined by its small-scale behavior. When you look from very far away, some details are forgotten and some become even more relevant. This procedure of looking from further and further away is called renormalization. Typically, only a few details remain relevant when you are really far from the smallest scale, and those few details make up the universality class.
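The classic cartoon of this procedure is the block-spin transformation: replace each block of spins by its majority vote, and you have literally taken one step back from the picture. A minimal sketch, assuming majority rule on 3x3 blocks:

```python
import numpy as np

def block_spin(config, b=3):
    """One renormalization step: replace each b x b block of spins by
    its majority, i.e. 'look at the lattice from further away'."""
    n = (config.shape[0] // b) * b
    blocks = config[:n, :n].reshape(n // b, b, n // b, b)
    return np.sign(blocks.sum(axis=(1, 3)) + 0.5).astype(int)

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(81, 81))   # high-temperature-like noise
coarse = block_spin(block_spin(spins))       # two zoom-out steps
print(coarse.shape)   # 9 x 9: the small-scale details have been forgotten
```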

So, balls in random manifolds share a universality class with other phenomena… from physics! (Well, if you know of a clear boundary between physics and math, let me know.) Growth of amorphous crystals from a vapour, forest fires, cell colonies… Basically, growth phenomena in noisy conditions are known to belong to the Kardar-Parisi-Zhang (KPZ) class. They share the fractal dimension of the interfaces and the Tracy-Widom distribution for the fluctuations.
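A classic member of the club, and an easy one to simulate, is ballistic deposition: particles rain down on random columns and stick to the first surface they touch. A minimal sketch (toy parameters of my choosing):

```python
import numpy as np

def ballistic_deposition(width, n_particles, seed=0):
    """Drop particles on random columns; each sticks at its own column's
    height + 1 or at the height of its tallest neighbor, whichever is
    higher. This growth rule belongs to the KPZ universality class."""
    rng = np.random.default_rng(seed)
    h = np.zeros(width, dtype=int)
    for col in rng.integers(0, width, size=n_particles):
        left, right = (col - 1) % width, (col + 1) % width
        h[col] = max(h[col] + 1, h[left], h[right])
    return h

h = ballistic_deposition(width=512, n_particles=200_000)
print(h.std())   # interface width; it grows with time as t^{1/3} (KPZ)
```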

Why is Nature so kind to us? Why does she agree to forget most of the mumbo-jumbo of the small-scale details, so that we can do physics and math at large scales that make sense? Somehow this connects with one of the deepest questions in science, as posed by Eugene Wigner: “why is mathematics so effective in describing Nature?”


The temperature of a single configuration

One of the first things that we learn in thermodynamics is that temperature is the property of an ensemble, not of a single configuration. But is it true? Can we make sense of the idea of the temperature of a single configuration?

I became sure that a meaning could be given to that phrase when I read Kenneth Wilson's article about the renormalization group in Scientific American, a long time ago. There he gave three pics describing the state of a ferromagnet at low, critical and high temperature. He gave just a single pic for each state!! No probability distributions, no notion of ensembles. Just pictures, which looked like these:

Was Wilson wrong? No, he wasn't! Black spins are down, and green ones are up. So, he wanted to show that at low temperatures (left pic) you have large domains. At high temperatures (right pic), it is all random. And at the critical temperature, the situation becomes interesting: you have patches within patches, of all sizes… But that is another story; I may tell it some other day.
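You can generate pictures like Wilson's yourself with a Metropolis simulation of the 2D Ising model. A minimal sketch (units with J = k = 1; the lattice is small and the run short so it stays quick, at the price of imperfect equilibration at low T):

```python
import numpy as np

def ising_sample(n, T, sweeps=300, seed=0):
    """Metropolis sampling of the 2D Ising model (J = k = 1): returns
    one single configuration drawn at temperature T."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps * n * n):
        i, j = rng.integers(0, n, size=2)
        # sum of the four neighbors, with periodic boundary conditions
        nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j] +
              s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = 2 * s[i, j] * nb   # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return s

Tc = 2 / np.log(1 + np.sqrt(2))     # exact critical temperature ~ 2.27
for T in (0.5 * Tc, Tc, 2 * Tc):    # low, critical, high: the three pics
    s = ising_sample(32, T)
    print(f"T = {T:.2f}  |magnetization| = {abs(s.mean()):.2f}")
```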

So, you see: Wilson's pics make the point, so it is true that a single configuration can give a feeling for the temperature at which it was taken.

In statistical mechanics, each possible configuration C for a system has a certain probability, given by the Boltzmann factor:

p(C) \propto \exp(-E(C)/kT)

where E(C) is the energy of the configuration, T is the temperature and k is Boltzmann’s constant. The proportionality is a technical thing: probabilities have to be normalized. In terms of conditional probability, we can say that, given a temperature, we have a probability:

p(C|T) \propto \exp(-E(C)/kT)

which means: given that the temperature is T, the probability is such and such. Our question is, therefore, what is the probability for each temperature, given the configuration?

p(T|C)
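We will get p(T|C) from Bayes in a moment; first, it may help to make the forward side p(C|T) concrete. A toy sketch with a three-configuration system (k = 1, energies made up for illustration):

```python
import numpy as np

def boltzmann(energies, T, k=1.0):
    """Normalized Boltzmann probabilities p(C|T) for a list of energies."""
    w = np.exp(-np.asarray(energies) / (k * T))
    return w / w.sum()   # the normalization constant is the partition function Z

# three-configuration toy system at two temperatures
print(boltzmann([0.0, 1.0, 2.0], T=0.5))   # low T: the ground state dominates
print(boltzmann([0.0, 1.0, 2.0], T=10.0))  # high T: nearly uniform
```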

Now, remember Bayes' theorem? It says that you can reverse conditional probabilities:

p(A|B) p(B) = p(B|A) p(A)

So, we can say:

p(T|C) = p(C|T) p(T)/p(C)

Great, but… what does that mean? We need the a priori probability distributions for the temperatures and for the configurations. That's a nice technical problem, which I will leave aside for now. But see my main point: given the temperature, you have a probability distribution for the configurations and, given the configuration, you have a probability distribution for the temperatures.
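Here is the whole pipeline on a system small enough to enumerate exactly: a 3x3 periodic Ising lattice (512 configurations), a flat prior p(T) over a grid of temperatures, and the exact partition function Z(T). All choices (grid, prior, lattice size) are mine, for illustration:

```python
import numpy as np
from itertools import product

N = 3   # tiny 3x3 periodic Ising lattice: 2^9 = 512 configurations

def energy(spins):
    """Ising energy: minus the sum over neighboring pairs (J = k = 1)."""
    s = np.asarray(spins).reshape(N, N)
    return -(np.sum(s * np.roll(s, 1, axis=0)) +
             np.sum(s * np.roll(s, 1, axis=1)))

all_E = np.array([energy(c) for c in product([-1, 1], repeat=N * N)])
Ts = np.linspace(0.5, 6.0, 200)   # temperature grid (flat prior over it)
logZ = np.array([np.log(np.exp(-all_E / T).sum()) for T in Ts])

def posterior(E_obs):
    """p(T|C) on the grid: exp(-E/T)/Z(T) times a flat prior, normalized."""
    logp = -E_obs / Ts - logZ
    p = np.exp(logp - logp.max())   # subtract the max for numerical safety
    return p / p.sum()

p = posterior(E_obs=-10.0)   # a fairly low-energy (cold-looking) configuration
print(Ts[np.argmax(p)])      # most plausible temperature for this configuration
```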

Of course, that distribution might be quite broad… Imagine that you have a certain system at a certain unknown temperature T. You get one configuration C_1 and, from there, try to estimate the temperature. You will get a certain probability distribution p(T|C_1), presumably broad. OK, now get more configurations and iterate: p(T|C_1,C_2,C_3,\cdots). As you get more and more, your distribution should narrow down, and you should finally get a delta peak at the right temperature! So, you get a sort of visual thermometer…
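To see the narrowing, take an even simpler toy model where the likelihood is exact: independent spins in a field h, where each spin is up with probability 1/(1 + e^{-2h/kT}) (again k = 1, all parameters made up). Feeding the posterior more and more configurations squeezes it around the true temperature:

```python
import numpy as np

# toy system: independent spins in a field h; p(up|T) = 1/(1 + e^{-2h/T}),
# so the likelihood of a configuration depends only on its number of up spins
h, n_spins, true_T = 1.0, 100, 2.5
rng = np.random.default_rng(3)
Ts = np.linspace(0.5, 8.0, 300)
p_up = lambda T: 1.0 / (1.0 + np.exp(-2.0 * h / T))

for m in (1, 10, 100):            # number of observed configurations
    log_post = np.zeros_like(Ts)  # flat prior over the temperature grid
    for _ in range(m):
        ups = rng.binomial(n_spins, p_up(true_T))   # one sampled configuration
        log_post += (ups * np.log(p_up(Ts)) +
                     (n_spins - ups) * np.log(1 - p_up(Ts)))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    width = np.sqrt(np.sum(post * Ts**2) - np.sum(post * Ts)**2)
    print(f"{m:4d} configs: T_hat = {Ts[np.argmax(post)]:.2f}, width = {width:.2f}")
```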

The idea is in a very alpha release… so comments are very welcome and, if you get something nice and publish it, please don't forget where you got the idea! :)

(Note: I already wrote an entry about this here, but this one is explained better.)