Entropy (1): the measure of disorder. The increase in entropy is the decrease in information.
Entropy (2): the measure of the energy which is available for work.
Problem: Reconcile both definitions.
Some people tell me that there is no problem here… Yet… I have the feeling that we call many different things entropy because we have the intuition that, in the end, they are all the same. My main problem: entropy (1) is an epistemological magnitude, whilst entropy (2) is ontological. Confusion between these two planes has given rise to all sorts of problems.
I should explain better: entropy (1) refers to my knowledge of the world, and entropy (2) to its substance. Yet, we might be able to reconcile them. With care, of course. Let us give an example.
Imagine a box with particles bouncing inside. We have no information at all. All possible states are equally likely. With no information, there is no work we can extract from the particles in the box. But imagine that we’re given some information, such as the temperature. Then we can extract some work, if we’re clever. Now, even more: imagine that we’re given the exact position and velocity of all the particles at a given moment. Then, again if we’re clever, we can extract a lot of work from the system! The more information we have, the more work we can extract.
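The argument above can be made quantitative with a toy model (my own illustration, not from the text): treat each particle's position as one bit, "left half" or "right half" of the box. Szilard's engine argument then bounds the work extractable per known bit by k_B·T·ln 2, so the more bits we know, the more work we can get.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def extractable_work(n_particles, bits_known, temperature):
    """Szilard-engine estimate of extractable work.

    Each particle contributes one bit of left/right position information.
    Every bit we actually know lets us insert a partition on the correct
    side and extract up to k_B * T * ln(2) of work; unknown bits yield
    nothing. Returns (work in joules, number of bits still unknown).
    """
    missing_bits = n_particles - bits_known
    work = bits_known * K_B * temperature * math.log(2)
    return work, missing_bits

# No information: no extractable work.
# Full information: N bits' worth, the operational maximum.
w_none, _ = extractable_work(100, 0, 300)
w_full, _ = extractable_work(100, 100, 300)
```

In this sketch the "entropy of the system" never appears as an intrinsic quantity: only the observer's missing bits do, which is exactly the operational point being made.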
So that was a purely operational view of entropy. The information content (epistemological, entropy (1)) determines the amount of work we can get (entropy (2)). But the ontological view fades away… The system has no intrinsic entropy. The amount of work which is available… available for whom?
Now a problem arises… the second law of thermodynamics, the most sacred of our laws of physics, states that the entropy of an isolated system tends to grow. “But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation”, as Arthur Eddington put it.
Can the second law adapt itself to this view? Yes, it can, but the result is a curious one: for all observers, no matter their knowledge and their abilities, as time goes by their information about an isolated physical system tends to decrease, and so does the amount of work they can extract from it.
Of course, isolated is the key word here: you are not allowed to perform any further measurements! Then time evolution tends to degrade your information, amplifying whatever initial uncertainties you had. Is this statement non-trivial? I think it is, in the following sense: it excludes the possibility of certain dynamical systems being physically realized.
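Here is a toy sketch of that degradation of information, under an assumed dynamics of my own choosing (a simple diffusive mixing on a discretized box, which is doubly stochastic, so Shannon entropy can only grow). We start knowing the particle's cell exactly (zero entropy) and watch our uncertainty rise toward the maximum, with no new measurements allowed.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def diffuse(p):
    """One step of mixing: each cell blends with its neighbours
    (periodic boundary). The map is doubly stochastic, so it can
    only increase, never decrease, the observer's entropy."""
    n = len(p)
    return [0.25 * p[(i - 1) % n] + 0.5 * p[i] + 0.25 * p[(i + 1) % n]
            for i in range(n)]

# Initially we know the particle's cell exactly: zero uncertainty.
p = [0.0] * 16
p[0] = 1.0

entropies = []
for _ in range(50):
    entropies.append(shannon_entropy(p))
    p = diffuse(p)
# entropies is non-decreasing, climbing from 0 toward log2(16) = 4 bits.
```

The non-trivial content of the claim shows up here as a restriction on the dynamics: for an entropy-non-decreasing statement of this kind to hold for every observer, the allowed evolution maps cannot be arbitrary.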
Still, the operational point of view does not fully satisfy me yet. It states that, no matter how clever you are, the amount of work you can get from an isolated system decreases with time, since your information does. This maximization over the intelligence of people is disturbing… What do you think?