From Schrödinger’s cat to quantum computers (II)

From analog computers to quantum computers

In this second part of the talk, we move on to discuss quantum computers. They are fashionable, they’re cute. You can boast in bars that you work on quantum computers and you’ll get free beers. But… are they really more powerful than their classical counterparts? I have devised a way to explain what the main difference is, and why they might well be more powerful. And it uses an old and forgotten concept: that of the analog computer.

What is an analog computer? It is a machine designed to solve a certain computational problem using physics. For example, the Antikythera mechanism: a device found in a sunken Greek ship from Hellenistic times. It has some gears which, when spinning, represent the motion of the planets, and helped predict their positions in the sky. In a certain way, the astrolabe is also an analog computer.

The Antikythera mechanism, an early analog computer.

But let us give a simpler example. Imagine that you must sort a large set of numbers, from lowest to highest. You can design a “sorting computer” in the following way: get some spaghetti and cut each strand to a length corresponding to one of the numbers. Then you take the full bunch and hit the table with it, flattening its bottom. Now you only have to pick out the spaghetti in order: first, second, third…
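As a toy illustration (no real pasta involved), here is a minimal Python sketch of the spaghetti computer, where “grabbing the tallest strand” is simulated with max:

```python
def spaghetti_sort(numbers):
    """Simulate the spaghetti analog computer: cut one strand per
    number, flatten the bunch on the table, then repeatedly grab
    the tallest strand that is still standing."""
    strands = list(numbers)            # cut the spaghetti
    picked = []
    while strands:
        tallest = max(strands)         # the strand your hand meets first
        strands.remove(tallest)
        picked.append(tallest)
    return picked[::-1]                # report from lowest to highest

print(spaghetti_sort([3.2, 1.5, 2.7, 0.9]))   # [0.9, 1.5, 2.7, 3.2]
```

Of course, the physical version does the “find the tallest” step in one move of your hand; that is exactly where the analog magic hides.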

Another cute example is the problem of finding the most distant cities in a roadmap. I give you a list of cities, and the distances between those which are linked by a road. An analog computer can be built with a long thread. We cut it into pieces representing the roads joining each pair of cities, and then we tie them together in a way resembling the full roadmap. The knots, of course, represent the cities. Now we pick up the roadmap by one of the knots, and let the rest hang freely. We look at the knot which hangs lowest. Now we pick that one up, and let the rest hang freely. In a few iterations we converge to a cycle between two knots. Those are the most distant cities in the roadmap.
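For small maps, the thread computer has a straightforward digital counterpart: compute the shortest distance between every pair of cities (here with Floyd–Warshall) and keep the pair that ends up furthest apart. The city names and road lengths below are invented for illustration:

```python
from itertools import combinations

def most_distant_cities(cities, roads):
    """Digital stand-in for the thread computer: shortest road
    distance between every pair of cities (Floyd-Warshall), then
    return the pair whose shortest path is longest."""
    INF = float('inf')
    dist = {(a, b): (0 if a == b else INF) for a in cities for b in cities}
    for a, b, d in roads:                     # each piece of thread
        dist[a, b] = dist[b, a] = min(dist[a, b], d)
    for k in cities:                          # let the roadmap "hang"
        for i in cities:
            for j in cities:
                if dist[i, k] + dist[k, j] < dist[i, j]:
                    dist[i, j] = dist[i, k] + dist[k, j]
    return max(combinations(cities, 2), key=lambda p: dist[p])

# A made-up little roadmap: (city, city, distance).
roads = [("A", "B", 5), ("B", "C", 3), ("A", "C", 9), ("C", "D", 4)]
print(most_distant_cities("ABCD", roads))     # ('A', 'D')
```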

And another one! Consider a 2D map on which we are given the positions of a few cities. We are asked to design the road network of minimal length which joins them all. Sometimes it will be convenient to create crossroads in the middle of nowhere. This is called the Steiner tree problem, and it is considered a “hard” problem. There is a way to solve it fast with an analog computer: we get a board and put nails or long pins on it representing the cities. Now we immerse the full board into soapy water and take it out very slowly. A soap film will have developed between the nails, with some “crossroads”, joining all pins. If we do it slowly enough, the film will have the minimum energy, which means that the total roadmap will have the minimal length. This idea is important: we have converted our computational problem into one that Nature can solve by “minimizing the energy”.

Experiments by Dutta and coworkers.


Our paradigm problem will be the spin glass problem. It sounds like a technical problem, but I have a very nice way to explain it: how to combine your goals in life and be happy. Well, we all want things which simply do not go together easily: work success, health, true love, going out with the buddies, children, etc. Let us represent each of those goals as a small circle, and give a weight to each of them. Now I draw “connection” lines among the goals, which can be positive, if they reinforce each other, or negative, if they oppose each other. Each line has a “strength”. So, for example, “having children” is highly incompatible with “going out with the buddies”, but strongly compatible with “finding true love”. What is the subset of those goals I should focus on in order to maximize my happiness?

Each node is a "target", blue links mean "compatible", and red means "incompatible".


I will tell you how to solve this problem, which is truly hard, using an analog computer. We put an atom at each node, and we make “arrow up” mean “focus on this goal”, while “arrow down” means “leave it”. Now, experimental physicists, who are very clever people, know how to “lay the cables” so that the energy is minimized when the system represents maximum happiness. Cool.
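In modern terms this is an Ising, or spin-glass, energy minimization. Here is a brute-force sketch with made-up goals, weights and couplings (every number below is invented for illustration; a real instance has many more nodes, which is precisely why brute force fails and an analog computer becomes tempting):

```python
from itertools import product

# Invented goals, weights (how much each matters on its own) and
# couplings (positive = compatible, negative = incompatible).
goals = ["love", "children", "buddies", "career"]
weight = {"love": 2, "children": 1, "buddies": 1, "career": 2}
coupling = {("love", "children"): +2,
            ("children", "buddies"): -3,
            ("career", "buddies"): -1}

def happiness(choice):
    """choice maps each goal to +1 (focus on it) or -1 (leave it)."""
    h = sum(weight[g] * choice[g] for g in goals)
    h += sum(J * choice[a] * choice[b] for (a, b), J in coupling.items())
    return h

# The analog computer minimizes energy = -happiness; here we simply
# try all 2^n arrow configurations.
best = max((dict(zip(goals, s)) for s in product([+1, -1], repeat=len(goals))),
           key=happiness)
print(best, happiness(best))
```

For four goals there are only 16 configurations, but the count doubles with every new goal, and at a few hundred goals no classical brute force can cope.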

No, not so cool. The problem is the “false minima”. Imagine that you’re exploring the bottom of the ocean and you reach a very deep trench. How can we know that it is, indeed, the deepest one? Most of the time, the analog computer I just described will get stuck in the first trench it finds. And, believe me, there are many. It’s just the story of my life: I know I could do much better, but… I would have to change so many things, and the intermediate situations are terrible. But today I feel brave, I really want to know how to be happy.

Quantum mechanics comes to our help. Remember that atoms are quantum, and that their arrows can point in other directions, not just up or down. So I put my analog computer, made of real (quantum) atoms, inside a giant magnet, which forces all spins to point rightwards. Remember: \left|\rightarrow\right> = \frac{1}{\sqrt{2}}\left(\left|\uparrow\right> + \left|\downarrow\right>\right), so now each one has a 50% probability of pointing up and 50% of pointing down, like Schrödinger’s cat. Maximal uncertainty. I know nothing about what to do. Now let us slowly lower the power of the magnet. Nature always wants to minimize the energy, so we pass through some complex intermediate states, highly “entangled”, in which some atoms decide early which position to choose. They are the “easy atoms”, for which no competition with other atoms exists. The “difficult atoms” are the ones about which you have more doubts, and they stay in a “catty” state for a longer time. When, finally, the power of the magnet has come to zero, all atoms must have made up their minds. We only have to read off the solution, which is the one of optimal happiness.

Are we sure? It all depends on the speed at which we have turned the magnet off. If I am greedy and do it fast, I will ruin the experiment and end up in some “false minimum” of the energy. That is, a false maximum of the happiness. This whole scheme is called “adiabatic quantum computation”. For physicists, “adiabatic” means “very slow”.

How slow should I go? Well, this is funny. Apparently, most of the time it’s not very important. But there is a critical moment, a certain “phase transition” point, when the entanglement between the atoms is maximal. Then it is crucial to advance really slowly. As an analogy, think that you have to carry a sleeping baby from the living room to his cradle. There is always a fateful moment, when you have to turn the damned doorknob. If you are unlucky, you may face more than one. But, for sure, you’ll face at least one.
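A single quantum spin already shows the effect in miniature. The sketch below (a deliberately tiny toy, not the full many-body problem) interpolates between the “magnet” term and the “problem” term and tracks the energy gap between the two levels, which is narrowest exactly half-way:

```python
import math

# Toy single-spin adiabatic protocol:
#   H(s) = -(1 - s) * X - s * Z,   s from 0 (magnet on) to 1 (magnet off).
# For a single spin the two energy levels are +-sqrt((1-s)^2 + s^2),
# so the gap between them is:

def gap(s):
    return 2 * math.sqrt((1 - s) ** 2 + s ** 2)

# The gap is smallest exactly half-way through the sweep: that is
# the "closed door" where we must move really slowly.
samples = [i / 10 for i in range(11)]
s_min = min(samples, key=gap)
print(s_min, gap(s_min))    # narrowest at s = 0.5
```

In a real many-body problem the gap at that critical point can shrink exponentially with the number of atoms, and the required sweep time grows accordingly; that is the whole difficulty.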

And what if we’re so clever that we have left all the doors open? That’s what worries us physicists most. There is a conjecture that some kind of “cosmic censorship” will impose a closed door in the path of every difficult problem. Nature might be evil, and might have put obstacles in the way of solving difficult problems too fast. It would be a new limit: the unsurmountable speed of light, the unstoppable increase of entropy… and now this? It is worth paying attention: the next few years will be full of surprises.

This is the second part of a lecture I originally delivered in the streets of Madrid for “Uni en la calle”, to protest the budget cuts in education and science in Spain, on March 9, 2013. And, later, at a nice high school in Móstoles.

More on compressed numbers

Spoiler alert: we are going to discuss the previous post, and we will give the solution to the puzzle…

So, the sequence to study was 2, 12, 1112, 3112, 132112… It is easy if you read it aloud: “2” is “one two”, so “12”, which in turn is “one one, one two”, so “1112”, and so on.

Let us get abstract. The transformation that takes us from one sequence to the next, e.g., from 12 to 1112, is a kind of description, so we will denote it with the letter D. Transformation D is the most basic compression algorithm! Imagine that you have a sequence like “333333333”. Then, under the operation of D, it transforms into “93”, which is a neat compression. But the main lesson we get from here is that compression algorithms may expand your data if used without control… This trick is widely known: it appears as a puzzle for computer science students, normally starting with the number 1, so: 1, 11, 21, 1211, etc. I preferred to start with 2, so that you wouldn’t find it on the web… :)
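In Python, D fits in a couple of lines with itertools.groupby:

```python
from itertools import groupby

def D(seq):
    """Read the digits aloud: each run of equal digits becomes
    'count' followed by 'digit'.  E.g. D('2') = '12', D('12') = '1112'."""
    return ''.join(str(len(list(g))) + d for d, g in groupby(seq))

s = '2'
for _ in range(4):
    s = D(s)
    print(s)            # 12, 1112, 3112, 132112

print(D('333333333'))   # '93': a genuine compression!
```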

But the properties of this operator are not studied out there, as far as I know… I have been thinking a bit, and today I wrote a computer program to mess around… I have a few questions for you:

  • The sequence “22” is “self-descriptive”, in the sense that D(22)=22. Are there any other self-descriptive sequences?
  • How does the length of the sequence D^n(1) grow? I mean: each step is increasing the length of the sequence. Is this going to stop? Is it going to grow exponentially, or what? Why?
  • Can all numbers appear in the sequence D^n(1)? Why?
  • Let us call a sequence “primitive-1” if it comes from the expansion of a sequence consisting of a single number, i.e., all the D^n(k), for all n and k. Is there any property that will tell us whether a sequence is primitive-1?
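Here is a small script to start playing with these questions (the only outside fact used below is Conway’s constant, ≈1.3036, the known asymptotic growth rate of the standard look-and-say sequence):

```python
from itertools import groupby

def D(seq):
    """One 'description' step of the look-and-say operator."""
    return ''.join(str(len(list(g))) + d for d, g in groupby(seq))

# Question 1: '22' describes itself.
print(D('22'))                  # '22'

# Question 2: watch the length of D^n(1) grow.  The ratio of
# consecutive lengths settles around Conway's constant, ~1.3036.
s = '1'
lengths = []
for _ in range(40):
    lengths.append(len(s))
    s = D(s)
print(lengths[-1] / lengths[-2])

# Question 3: starting from '1', only the digits 1, 2 and 3
# ever show up (a run of four equal digits never forms).
print(set(s))
```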

I have a few hints regarding these questions, but I don’t have the answer to all of them, so we’ll play together, if you want…