## Monday, June 19, 2017

### What can the physics of spin crystals tell us about how we cooperate?

In the natural world, cooperation is everywhere. You can see it among people, of course, but not everybody cooperates all the time. Some people, as I'm sure you've heard or experienced, don't really care for cooperation. Indeed, if cooperation were something that everybody does all the time, we wouldn't even talk about it: we'd take it for granted.

But we cannot take it for granted, and the main reason has to do with evolution. Grant me, for a moment, that cooperation is an inherited behavioral trait. This is not a novelty, mind you: plenty of behavioral traits are inherited. You may not be fully aware that you yourself have such traits, but you surely recognize them in animals, in particular in courtship displays and all the complex rituals associated with them. So if a behavioral trait is inherited, it is very likely selected for because it enhances the organism's fitness. But the moment you think about how cooperation as a trait may have evolved, you hit a snag. A problem, a dilemma.

If cooperation is a decision that promotes increased fitness when two (or more) individuals engage in it, it must be just as possible not to engage in it. (Remember, cooperation is only worth talking about if it is voluntary.) The problem arises when, in a group of cooperators, an individual decides not to cooperate. It becomes a problem because that individual still gets the benefit of all the other individuals cooperating with them, but without actually paying the cost of cooperation. Obtaining a benefit without paying the cost means you get mo' money, and thus higher fitness. This is a problem because if this non-cooperation decision is an inherited trait just as cooperation is, well then the defector's kids (a defector is a non-cooperator) will do it too, and also leave more kids. If this goes on long enough, all the cooperators will have been wiped out and replaced by, well, defectors. In the parlance of evolutionary game theory, cooperation is an unstable trait that is vulnerable to infiltration by defectors. In the language of mathematics, defection--not cooperation--is the stable equilibrium fixed point (a Nash equilibrium). In the language of you and me: "What on Earth is going on here?"

Here's what's going on. Evolution does not look ahead. Evolution does not worry that "Oh, all your non-cooperating nonsense will bite you in the tush one of these days", because evolution rewards success now, not tomorrow. By that reasoning, there should not be any cooperating going on among people, animals, or microbes for that matter. Yet, of course, cooperation is rampant among people (most), animals (most), and microbes (many). How come?

The answer to this question is not simple, because nature is not simple. There are many different reasons why the naive expectation that evolution cannot give rise to cooperation is not what we observe today, and I can't go into analyzing all of them here. Maybe one day I'll do a multi-part series (you know I'm not above that) and go into the many different ways evolution has "found a way". In the present setting, I'm going to go all "physics" with you instead, and show you that we can actually try to understand cooperation using the physics of magnetic materials. I kid you not.

Cooperation occurs between pairs of players, or groups of players. What I'm going to show you is how you can view both of these cases in terms of interactions between tiny magnets, which are called "spins" in physics. They are the microscopic (tiny) things that macroscopic (big) magnets are made out of. In theories of ferromagnetism, the magnetism is created by the alignment of electron spins in the domains of the magnet, as in the picture below.
 Fig. 1: Micrograph of the surface of a ferromagnetic material, showing the crystal "grains", which are areas of aligned spins (Source: Wikimedia).
If the temperature were exactly zero, then in principle all these domains could align to point in the same direction, so that the magnetization of the crystal would be maximal. But when the temperature is not zero (kelvins, that is), then the magnetization is less than maximal. As the temperature is increased, the magnetization of the crystal decreases, until it abruptly vanishes at the so-called "critical temperature". It would look something like the plot below.
 Fig. 2: Magnetization M of a ferromagnetic crystal as a function of temperature T (arbitrary units).
"That's all fine and dandy", I hear you mumble, "but what does this have to do with cooperation?" And before I have a chance to respond, you add: "And why would temperature have anything to do with how we cooperate? Do you become selfish when you get hot?"

All good questions, so let me answer them one at a time. First, let us look at a simpler situation, the "one-dimensional spin chain" (compared to the two-dimensional "spin-lattice"). In physics, when we try to solve a problem, we first try to solve the simplest and easiest version of the problem, and then we check whether the solution we came up with actually applies to the more complex and messier real world. A one-dimensional chain may look like this one:
 Fig. 3: A one-dimensional spin chain with periodic boundary condition
This chain has no beginning or end, so that we don't need to deal with, well, beginnings and ends. (We can do the same thing with a two-dimensional crystal: it then topologically becomes a torus.)

So what does this have to do with cooperation? Simply identify a spin-up with a cooperator, and a spin-down with a defector, and you get a one-dimensional group of cooperators and defectors:

C-C-C-D-D-D-D-C-C-C-D-D-D-C-D-C

Now, asking for the average fraction of C's vs. D's on this string becomes the same thing as asking for the magnetization of the spin chain! All we need is to write down how the players in the chain interact. In physics, spins interact with their nearest neighbors, and there are three different values for the "interaction energies", depending on how the spins are oriented. For example, you could write
$$E(\uparrow,\uparrow)=a, \quad E(\uparrow,\downarrow)=E(\downarrow,\uparrow)=b, \quad E(\downarrow,\downarrow)=c\;,$$
which you could also write into matrix form like so:
$$E=\begin{pmatrix} a & b\\ b& c\\ \end{pmatrix}$$
And funny enough, this is precisely how payoff matrices in evolutionary game theory are written! And because payoffs in game theory are translated into fitness, we can now see that the role of energy in physics is played by fitness in evolution. Except, as you may have noted immediately, that in physics the interactions lower the energy, while in evolution, Darwinian dynamics maximizes fitness. How can the two be reconciled?

It turns out that this is the easy part. If we replace all fitnesses by "energy = max fitness minus fitness", then fitness maximization is turned into energy minimization. This can be achieved simply by taking a payoff matrix such as the one above, identifying the largest value in the matrix, and replacing all entries by "largest value minus entry". And in physics, a constant added to (or subtracted from) all energies does not matter (remember when they told you in physics class that all energies are defined only in relation to some scale? That's what they meant by that).
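If you like to see this conversion spelled out, here is a minimal sketch in Python. The numerical values for $a$, $b$, and $c$ are made up purely for illustration; they are not taken from the paper.

```python
import numpy as np

# Interaction matrix in the symmetric (a, b, c) form above,
# with made-up example values; these numbers are purely illustrative.
a, b, c = 3.0, 0.0, 1.0
fitness = np.array([[a, b],
                    [b, c]])

# Fitness maximization becomes energy minimization:
# energy = (largest entry) - entry.  Adding or subtracting a constant
# from all energies changes nothing physically, so the dynamics agree.
energy = fitness.max() - fitness

# The best (highest-fitness) pairing now has the lowest (zero) energy.
print(energy)
```

The pairing with the largest payoff ends up with zero energy, and every other pairing with a positive one, which is exactly the shift-by-a-constant trick described above.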

"But what about the temperature part? There is no temperature in game theory, is there?"

You're right, there isn't. But temperature in thermodynamics is really just a measure of how energy fluctuates (it's a bit more complicated, but let's leave it at that). And of course fitness, in evolutionary theory, is also not a constant. It can fluctuate (within any particular lineage) for a number of reasons. For example, in small populations the force that maximizes fitness (the equivalent of the energy-minimization principle) isn't very effective, and as a result the selected fitness will fluctuate (generally, decrease, via the process of genetic drift). Mutations also will lead to fitness fluctuations, so generally we can say that the rate at which fitness fluctuates due to different strengths of selection can be seen as equivalent to temperature in thermal physics.

One way to model the strength of selection in game theory is to replace the Darwinian "strategy inheritance" process (a successful strategy giving rise to successful "children-strategies") with a "strategy adoption" model, where a player can adopt the strategy of a competing individual with a certain probability. Temperature in such a model would simply quantify how likely it is that an individual will adopt an inferior strategy. And it turns out that "strategy adoption" and "strategy inheritance" give rise to very similar dynamics, so we can use strategy adoption to model evolution. And lo and behold, the way the boundaries between groups of aligned spins change in magnetic crystals is precisely via this "spin adoption" model, also known as Glauber dynamics. This will become important later on.
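To make the adoption rule concrete, here is a small sketch of a Glauber-style adoption probability: a logistic function of the payoff difference, which is the standard functional form for Glauber dynamics. The payoff values and temperature are just for illustration.

```python
import math

def adopt_probability(my_payoff, neighbor_payoff, T):
    """Probability that a player adopts the neighbor's strategy.
    At small T, better strategies are adopted almost surely;
    at large T, adoption is nearly a coin flip."""
    return 1.0 / (1.0 + math.exp(-(neighbor_payoff - my_payoff) / T))

# Adopting a better strategy is likely, an inferior one unlikely,
# but not impossible: that is what "temperature" means here.
print(adopt_probability(1.0, 2.0, T=0.5))
print(adopt_probability(2.0, 1.0, T=0.5))
```

Note that the two probabilities sum to one: the rule treats "up" and "down" moves symmetrically, just as spin flips do in a magnet.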

OK, I realize this is all getting a bit dry. Let's just take a time-out, and look at cat pictures. After all, there is nothing that can't be improved by looking at cat pictures.  Here's one of my cat eyeing our goldfish:
 Fig. 4: An interaction between a non-cooperator with an unwitting subject
Needless to say, the subsequent interaction between the cat and the fish did not bode well for the future of this particular fish's lineage, but it should be said that because the fish was alone in its bowl, its fitness was zero regardless of the unfortunate future encounter.

After this interlude, before we forge ahead, let me summarize what we have learned.

1. Cooperation is difficult to understand as being a product of evolution because cooperation's benefits are delayed, and evolution rewards immediate gains (which favor defectors).

2. We can study cooperation by exploiting an interesting (and not entirely overlooked) analogy between the energy-minimization principle of physics, and the fitness-maximizing principle of evolution.

3. Cooperation in groups with spatial structure can be studied in one dimension. Evolutionary game theory between players can be viewed as the interaction of spins in a one-dimensional chain.

4. The spin chain "evolves" when spins "adopt" an alternative state (as if mutated) if the new state lowers the energy/increases the fitness, on average.

All right, let's go a-calculating! But let's start small. (This is how you begin in theoretical physics, always). Can we solve the lowly Prisoner's Dilemma?

What's the Prisoner's Dilemma, you ask? Why, it's only the most famous game in the literature of evolutionary game theory! It has a satisfyingly conspiratorial name, with an open-ended unfolding. Who are these prisoners? What's their dilemma? I wrote about this game before here, but to be self-contained I'll describe it again.

Let us imagine that a crime has been committed by a pair of hoodlums. It is a crime somewhere between petty and serious, and if caught in flagrante, the penalty is steep (but not devastating). Say, five years in the slammer. But let us imagine that the two conspirators were caught fleeing the scene independently, leaving the law-enforcement professionals puzzled. "Which of the two is the perp?", they wonder. They cannot convene a grand jury because each of the alleged bandits could say that it was the other who committed the deed, creating reasonable doubt. So each of the suspects is questioned separately, and the interrogator offers each the same deal: "If you tell us it was the other guy, I'll slap you with a charge of being in the wrong place at the wrong time, and you get off with time served. But if you stay mum, we'll put the screws on you." The honorable thing is, of course, not to rat out your compadre, because they will each get a lesser sentence if the authorities cannot pin the deed on an individual. But they also must fear being had: having a noble sentiment can land you behind bars for five years while your former friend dances in the streets. Staying silent is the "cooperating" move; ratting out your partner is "defection". The rational solution in this game is indeed to defect and rat out, even though with this move each player gets a sentence that is longer than if they both had cooperated. But it is the "correct" move. And herein lies the dilemma.

A typical way to describe the costs and benefits in this game is in terms of a payoff matrix (shown here for the row player, with strategies ordered C, D):
$$E=\begin{pmatrix} b-c & -c\\ b & 0\\ \end{pmatrix}$$
Here, $b$ is the benefit you get from cooperation, and $c$ is the cost. If both players cooperate, the "row player" receives $b-c$, as does the "column" player. If the row player cooperates but the column player defects, the row player pays the cost but does not reap the reward, for a net $-c$. If the tables are reversed, the row player gets $b$ but does not pay the cost, as they just defected. If both defect, they each get zero. So you see that the matrix only lists the reward for the row player (but the payoff for the column player is evident from inspection).
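You can convince yourself of the dilemma with a few lines of code. This sketch uses hypothetical values $b=3$ and $c=1$, but any $b>c>0$ gives the same conclusion.

```python
# Row player's payoffs in the Prisoner's Dilemma described above,
# with hypothetical values b = 3, c = 1 (any b > c > 0 works).
b, c = 3.0, 1.0
payoff = {('C', 'C'): b - c, ('C', 'D'): -c,
          ('D', 'C'): b,     ('D', 'D'): 0.0}

# Whatever the opponent does, defecting pays more than cooperating:
for their_move in ('C', 'D'):
    assert payoff[('D', their_move)] > payoff[('C', their_move)]

# ... and yet mutual cooperation beats mutual defection. The dilemma.
assert payoff[('C', 'C')] > payoff[('D', 'D')]
print("defection dominates, but (C,C) beats (D,D)")
```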

We can now use this matrix to calculate the mean "magnetization" of a one-dimensional chain of Cs and Ds, by pretending that ${\rm C}=\uparrow$ and ${\rm D}=\downarrow$ (the opposite identification would work just as well). In thermal physics, we calculate this magnetization as a function of temperature, but I'm not going to show you in detail how to do this. You can look it up in the paper that I'm going to link to at the end. Yes I know, you are so very surprised that there is a paper attached to the blog post. Or a blog post attached to the paper. Whatever.

Let me show you what this fraction of cooperators (or magnetization of the spin crystal) looks like:
 Fig. 5: "Magnetization" of a 1D chain, or fraction of cooperators, as a function of the net payoff $r=b-c$, for three different temperatures.
You notice immediately that the magnetization is always negative, which here means that there are always more defectors than there are cooperators. The dilemma is immediately obvious: as you increase $r$ (meaning that there is increasingly more benefit than cost), the fraction of defectors actually increases. When the net payoff for cooperation increases, you would expect that there would be more cooperation, not less. But the temptation to defect increases as well, and so defection becomes more and more rational.

Of course, none of these findings are new. But it is the first time that the dilemma of cooperation was mapped to the thermodynamics of spin crystals. Can this analogy be expanded, so that the techniques of physics can actually give new results?

Let's try a game that's a tad more sophisticated: the Public Goods game. This game is very similar to the Prisoner's Dilemma, but it is played by three or more players. (When played by two players, it is the same as the Prisoner's Dilemma.) The idea of this game is also simple. Each player in the group (say, for simplicity, three) can either pay into a "pot" (the Public Good), or not. Paying means cooperating, and not paying (obviously) is defection. After this, the total Public Good is multiplied by a parameter that is larger than 1 (we will call it $r$ here also), which you can think of as a synergy effect stemming from the investment, and the result is then equally divided among all players in the group, regardless of whether they paid in or not.

Cooperation can be very lucrative: if each player in the group pays in one unit and the synergy factor is $r=2$, then each gets back two (the pot has grown to six from being multiplied by two, and those six are evenly divided among the three players). This means one hundred percent ROI (return on investment). That's fantastic! Trouble is, there's a dilemma. Suppose Joe Cheapskate does not pay in. Now the pot is 2, multiplied by 2 is 4. In this case each player receives 1 and 1/3 back, which is still an ROI of 33 percent for the cooperators, not bad. But check out Joe: he paid in nothing and got 1.33 back. His ROI is infinite. If you translate earnings into offspring, who do you think will win the battle of fecundity? The cooperators will die out, and this is precisely what you observe when you run the experiment. As in the Prisoner's Dilemma, defection is the rational choice. I can show this to you by simulating the game in one dimension again. Now, a player interacts with its two nearest neighbors, to the left and to the right.
The payoff matrix is different from that of the Prisoner's Dilemma, of course. In the simulation, we use "Glauber dynamics" to update a strategy. (Remember when I warned that this was going to be important?) The strength of selection is inversely proportional to what we would call temperature, and this is quite intuitive: if the temperature is high, then changes are so fast and random that selection is very ineffective because temperature is larger than most fitness differences. If the temperature is small, then tiny differences in fitness are clearly "visible" to evolution, and will be exploited.
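For the curious, here is a minimal toy version of such a simulation (my own sketch, not the actual simulation code used for the paper): a ring of players, each earning the Public Goods payoff from the single three-player group centered on it, updated with Glauber-style strategy adoption.

```python
import math
import random

def payoff(chain, i, r):
    """Public Goods payoff of player i (1 = cooperator, 0 = defector)
    in the group formed with its two nearest neighbors on the ring."""
    n = len(chain)
    pot = chain[i - 1] + chain[i] + chain[(i + 1) % n]
    return r * pot / 3.0 - chain[i]  # share of the pot minus own payment

def glauber_step(chain, r, T, rng):
    """One update: a random player considers adopting the strategy of a
    random neighbor, with the Glauber (logistic) probability."""
    n = len(chain)
    i = rng.randrange(n)
    j = (i + rng.choice((-1, 1))) % n
    delta = payoff(chain, j, r) - payoff(chain, i, r)
    if rng.random() < 1.0 / (1.0 + math.exp(-delta / T)):
        chain[i] = chain[j]

rng = random.Random(1)
chain = [rng.randrange(2) for _ in range(200)]
for _ in range(200_000):
    glauber_step(chain, r=4.0, T=0.1, rng=rng)  # r above the group size 3

print("fraction of cooperators:", sum(chain) / len(chain))
```

With the synergy factor above the group size ($r=4>3$), cooperators should take over; set $r=2$ instead and the chain should fill up with defectors, in line with Fig. 6.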

The simulations show that (as opposed to the Prisoner's Dilemma) cooperation can be achieved in this game, as long as the synergy factor $r$ is larger than the group size:
 Fig. 6: Fraction of cooperators in a computational simulation of the Public Goods game in one dimension. Here T is the inverse of the selection strength. As $T\to0$, the change from defection to cooperation becomes more and more abrupt. There are error bars, but they are too small to be seen.
This graph shows that there is an abrupt change from defection to cooperation as the synergy factor is increased, and this change becomes more and more abrupt the smaller the "temperature", that is, the larger the strength of selection. This behavior is exactly what you would expect from a phase transition at a critical $r=3$, so it looks as if this game should also be describable by thermodynamics.

Quick aside here. If you just said to yourself "Wait a minute, there are no phase transitions in one dimension" because you know van Hove's theorem, you should immediately stop reading this blog and skip right to the paper (link below) because you are in the wrong place: you do not need this blog. If, on the other hand, you read "van Hove" and thought "Who?", then please keep on reading. It's OK. Almost nobody knows this theorem.

Alright, I said we were going to do the physics now. I won't show you how exactly, of course. There may not be enough cat pictures on the Internet to get you to follow this. (Checks.) Actually, I take that back. YouTube alone has enough. But it would still take too long, so let's just skip right to the result.

I derive the mean fraction of cooperators as the mean magnetization of the spin chain, which I write as $\langle J_z\rangle_\beta$. This may look odd to you because none of these symbols have been defined here. The $J$ refers to the spin operator in physics, and the $z$ refers to the z-component of that operator. The spins you have seen here all point either up or down, which just means $J_z$ is plus one or minus one here. The $\beta$ is a common abbreviation in physics for the inverse temperature, that is, $\beta=1/T$. And the angled brackets just mean "average". So the symbol $\langle J_z\rangle_\beta$ is just reminding you that, formally, I'm not calculating the average fraction of cooperators. I am calculating the magnetization of a spin chain at finite temperature, which is the average number of spins up minus spins down. And I did all this by converting the payoff matrix into a suitable Hamiltonian, which is really just an energy function.

Mathematically, the result turns out to be surprisingly simple:
$$\langle J_z\rangle=\tanh[\frac\beta2(r/3-1)] \ \ \ (1)$$
Here $\beta$ is just the inverse temperature, that is, $\beta=1/T$. Let's plot the formula, to check how this compares to simulating game theory on a computer:
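Eq. (1) is simple enough to evaluate yourself; here is a quick sketch:

```python
import math

def mean_magnetization(r, beta):
    """Eq. (1): mean 'magnetization' (cooperator fraction minus
    defector fraction) of the chain at inverse temperature beta."""
    return math.tanh(0.5 * beta * (r / 3.0 - 1.0))

# The transition sits at r = 3 (the group size): below it defectors
# dominate, above it cooperators do; larger beta sharpens the step.
for r in (1.0, 3.0, 5.0):
    print(r, mean_magnetization(r, beta=5.0))
```

The formula is antisymmetric about $r=3$, and as $\beta\to\infty$ (selection strength to infinity) it approaches the sharp step seen in the low-temperature simulation curves.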
 Fig. 7: The above formula, plotted against $r$ for the different inverse temperatures $\beta$.

OK, let's put them side-by-side, the simulation, and the theory:
You'll notice that they are not exactly the same, but they are very close. Keep in mind that the theory assumes (essentially) an infinite population. The simulation has a finite population (1,024 players), and I show the average of 100 independent replicate simulations, each of which ran for 2 million updates, meaning that each site of the chain was updated about 2,000 times.

Even though they are so similar, how they were obtained could hardly be more different. The set of curves on the left was obtained by updating "actual" strings many, many times, and recording the fraction of Cs and Ds on them after doing this 2 million times. (This, like every computational simulation you see in this post, was done by my collaborator on this project, Arend Hintze.) To obtain the curve on the right, I just used a pencil, paper, and an eraser. It shows off the power of theory: once you have a closed-form solution such as Eq. (1) above, not only does this solution tell you some important things, but you can now imagine using the formalism to do all the other things that are usually done in spin physics, and that we never would have thought of doing if all we did was simulate the process.

And that's exactly what Arend Hintze and I did: we looked for more analogies with magnetic materials, and whether they can teach you about the emergence of cooperation. But before I show you one of them, I will mercifully throw in some more cat pictures. This is my other cat, the younger one. She is in a box, and no, Schrödinger had nothing to do with it. Cats just like to sit in boxes. They really do.
 Our cat Alex has appropriated the German Adventskalender house
All right, enough with the cat entertainment. Let's get back to the story. Arend and I had some evidence from a previous paper [1] that this barrier to cooperation (namely, that the synergy has to be at least as large as the group size) can be lowered if defectors can be punished (by other players) for defecting. That punishment, it turns out, is mostly meted out by cooperators, because being a defector and a punisher at the same time turns out to be an exceedingly bad strategy. I'm honestly not making a political commentary here. Honest. OK, almost honest.

And thinking about punishment as an "incentive to align", we wondered (seeing the analogy between the battle between cooperators and defectors, and the thermodynamics of low-dimensional spin systems) whether punishment could be viewed like a magnetic field that attempts to align spins in a preferred direction.

And that turned out to be true. I will again spare you the technical part of the story (which is indeed significantly more technical this time), but I'll show you the side-by-side of the simulation and the theory. In those plots, I show you only one temperature, $T=0.2$, that is, $\beta=5$. But I show three different fines, meaning punishments with different strengths of effect, here labelled by $\epsilon$. The higher $\epsilon$, the higher the "pain" of punishment for the defector (measured in terms of reduced payoff).

When we did the simulations, we also included a parameter that is the cost of punishing others. Indeed, doing so subtracts from a cooperator's net payoff: you should not be able to punish others without suffering a little bit yourself. (Again, I'm not being political here.) But we saw little effect of this cost on the results, while the effect of punishment really mattered. When I derived the formula for the magnetization as a function of the cost of punishment $\gamma$ and the effect of punishment $\epsilon$, I found:
$$\langle J_z\rangle=\frac{1-\cosh^2(\beta\frac\epsilon4)e^{-\beta(\frac r3+\frac\epsilon2-1)}}{1+\cosh^2(\beta\frac\epsilon4)e^{-\beta(\frac r3+\frac\epsilon2-1)}} \ \ \ (2)$$
Keep in mind, I don't expect you to nod knowingly when you see that formula. What I want you to notice is that there is no $\gamma$ in it. I can assure you it was there during the calculation, but in the very last steps it miraculously cancelled out of the final equation, leaving a much simpler expression than the one I had carried through from the beginning.
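You can check a couple of things about Eq. (2) numerically: at $\epsilon=0$ (no punishment) it collapses back to Eq. (1), and a positive $\epsilon$ pushes the magnetization up at fixed $r$. A quick sketch:

```python
import math

def eq1(r, beta):
    """Eq. (1): magnetization without punishment."""
    return math.tanh(0.5 * beta * (r / 3.0 - 1.0))

def eq2(r, beta, eps):
    """Eq. (2): magnetization with punishment of effect eps.
    Note that the cost of punishing, gamma, does not appear at all."""
    x = math.cosh(beta * eps / 4.0) ** 2 \
        * math.exp(-beta * (r / 3.0 + eps / 2.0 - 1.0))
    return (1.0 - x) / (1.0 + x)

# With eps = 0, Eq. (2) reduces to Eq. (1) ...
for r in (1.0, 2.5, 4.0):
    assert abs(eq2(r, beta=5.0, eps=0.0) - eq1(r, beta=5.0)) < 1e-12

# ... and punishment raises the cooperator fraction at fixed r:
print(eq2(2.5, beta=5.0, eps=0.0), eq2(2.5, beta=5.0, eps=1.0))
```

The reduction at $\epsilon=0$ is just the identity $(1-e^{-2y})/(1+e^{-2y})=\tanh y$ with $y=\frac\beta2(r/3-1)$, which is why the two formulas agree exactly.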

And that, dear reader, who has endured for so long, being propped up and carried along by cat pictures no less, is the main message I want to convey. Mathematics is a set of tools that can help you keep track of things. Maybe a smarter version of me could have realized all along that the cost of punishment $\gamma$ will not play a role, and math would have been unnecessary. But I needed the math to tell me that (the simulations had hinted at that, but it was not conclusive).

Oh, I now realize that I never showed you the comparison between simulation and theory in the presence of punishment (aka, the magnetic field). Here it is (simulation on the left, theory on the right):

So what is our take-home message here? There are many, actually. A simple one tells you that to evolve cooperation in populations, you need some enabling mechanisms to overcome the dilemma. Yes, a synergy larger than the group size will get you cooperation, but this is achieved by eliminating the dilemma, because when the synergy is that high, not contributing actually hurts your bottom line. Here the enabling mechanism is punishment, but we need to keep in mind that punishment is only possible if you can distinguish cooperators from defectors (lest you punish indiscriminately). This ability is tantamount to the communication of one bit of information, which is the enabling factor I previously wrote about when discussing the Prisoner's Dilemma with communication.

A less simple message is that while computational simulations are a fantastic tool to go beyond mathematics--to go where mathematics alone cannot go [3]--new ideas can open up paths that we thought could only be pursued with the help of computers. Mathematics (and physics) thus still has some surprises to deliver to us, and Arend and I are hot on the trail of others. Stay tuned!

PS: I will update the reference to the article [2] to the published version once the link is available.

References

[1] A. Hintze and C. Adami, Punishment in public goods games leads to meta-stable phase transitions and hysteresis, Physical Biology 12 (2015) 046005.
[2] C. Adami and A. Hintze, Thermodynamics of evolutionary games. ArXiv (2017)
[3] C. Adami, J. Schossau, and A. Hintze, Evolutionary game theory using agent-based methods, Phys. Life Reviews 19 (2016) 38-42.

## Monday, January 9, 2017

### Are quanta particles or waves?

The title of this post is an age-old question isn't it? Particle or wave? Wave or particle? Many have rightly argued that the so-called "wave-particle duality" is at the very heart of quantum weirdness, and hence, of all of quantum mechanics. Einstein said it. Bohr said it. Feynman said it. Two out of those three are physics heroes of mine, so that's a majority right there.

Feynman, when talking about what we now call the wave-particle duality, was referring to the famous "double-slit experiment". He wrote (in his famous Feynman Lectures, Chapter 37 of Volume 1, to be precise):
 Richard Feynman (1918-1988) Source: Wikimedia
"We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by “explaining” how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics."
So what is Feynman talking about here? Instead of launching on a lengthy exposition of the double-slit experiment, as luck would have it I've already done that, in a blog post about the quantum eraser. That post, incidentally, was No. 6 in the "Quantum measurement" series that starts here. You don't necessarily have to have read all those posts to follow this one, but believe me, it would help a lot. At the minimum, start at No. 6 if you're not already familiar with the double-slit experiment. But you'll get a succinct introduction to the double-slit experiment below anyway.

Alright, back to quantum mechanics. Actually, step back a little bit more, to classical mechanics. In classical physics, there is no duality between waves and particles. Waves are waves, and they would never behave like particles. For example, you can't kick a wave, really, no matter what the surfer types tell you. Particles, on the other hand, do not interfere with each other as waves do. You can kick particles (kinda), and you can count them. You can't count waves.

What Bohr, Einstein, and Feynman are trying to tell you is that in quantum mechanics (meaning the real world, because as I have told you before, classical mechanics is an illusion, it does not exist) the same stuff can be either particle OR wave. Not both, mind you. Here's what Einstein said about this, and to tell you the truth, this statement sounds like he's been hanging out with Bohr far too much:
 A. Einstein (1879-1955) Source: Wikimedia

"It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do."
I've used a picture of Einstein in 1904 here, because you've seen far too many pics of him sticking out his tongue and hair disheveled. He wasn't like that most of the time when he made his most important contributions.

Lest you think that the troubles these 20th-century physicists had with quantum mechanics are the stuff of history, think again. In 2012, a mere 5 years ago, experimenters from Germany (in the lab of the very eminent Wolfgang Schleich) claimed that they had collected evidence that a quantum system can be both particle and wave at the same time. Such an observation, if true, would run afoul of Bohr's "duality principle", which declared that a quantum system can only be one or the other, depending on the type of experiment used to examine the system. One or the other, but never both.

Rest assured though, analyzing the results of the Schleich experiment in a different way reveals that all is well with complementarity after all, as was pointed out by a team at the University of Ottawa, led by the equally eminent Robert Boyd. (You can read an excellent summary of that controversy in Tom Siegfried's piece here.) What all this fighting about duality should teach you is that this is not at all a solved problem. As recently as a few days ago, Steven Weinberg (who, full disclosure, has also been in my pantheon of physicists ever since I read his "The First Three Minutes" at a very tender age) wrote about the particle-wave duality in the New York Review of Books. I hope that he reads this post, because it may alleviate some of his troubles.

In this piece, entitled "The Trouble with Quantum Mechanics", Weinberg admits to being as puzzled as his predecessors Einstein, Bohr, and Feynman, about the true nature of quantum physics. How can we understand, he muses, that quantum dynamics is governed by a deterministic equation (the Schrödinger equation), yet when we try to measure something, then all we can muster is probabilities? "So we still have to ask", Weinberg writes, "how do probabilities get into quantum mechanics?"

How indeed. You know of course, from reading my diatribes, that this is a question I am interested in myself. I have obliquely hinted that I think I know where the probabilities are coming from (if you can find the relevant post) and that one day I'll write a detailed account of that idea (it's 3/4 written already, actually). But today is not that day. Having convinced you that the particle-wave duality is still a very hot topic in quantum physics, let me take on that particular subject first.

What I want to do in this blog post is to make you think differently about the complementarity principle. What I'm going to tell you is that you should stop thinking in terms of "particle or wave". It is a false dichotomy. It is a false dilemma because quantum systems are neither particle nor wave. Those two are classical concepts, after all. Strictly speaking, quantum systems are quantum fields. But this is not the time to delve into quantum field theory, so instead I will try to marshal the tools of quantum information theory to tell you what is really complementary in quantum measurement, what it is that you can have "only one of", and what it is that is being "traded off". You don't exchange a bit of particle for a bit of wave, this much I can tell you right here.

To do this, I have to introduce you to some very counter-intuitive quantum stuff. Now, you might argue: "All quantum stuff is counter-intuitive", and I'd have to agree with you if all your intuition is classical. What I am going to tell you is stuff that even baffles seasoned quantum physicists. I'm going to tell you about quantum experiments where the "nature" of the quantum experiment that you perform can be changed after you've already completed the experiment!

Let me remind you right here, that the--also very eminent--Niels Bohr tried to teach us that whether a quantum system appears as a particle or as a wave depends on the type of experiment you subject it to. Here I'm telling you that this is a bunch of hogwash, because I'll show you that when you do an experiment, you can change whether it is a "particle"- or a "wave"-experiment long after the data have been collected!

I know you're not shocked at my dissing Bohr as I have a habit of doing so. But I'm in good company, by the way, if you read what Feynman wrote about Bohr in his "Surely You're Joking" series.

"Alright, I'll bite", one of you readers exclaimed just now, "how do you retroactively change the type of experiment you make?"

Glad you asked. Because now I can talk about John Archibald Wheeler. Wheeler was not a conventional physicist: Even though his early career as a nuclear physicist led to several important contributions to the Manhattan project, he was also interested in many other areas of physics. Indeed, he was a central figure in the "revival" of general relativity theory. (That theory had gone a bit out of fashion when people realized that many predictions of the theory were difficult to measure.) Wheeler co-authored what many (including myself) think is the best book on the topic: "Gravitation" (with Charles Misner and Kip Thorne). That book is often just referred to as "MTW".

 John Archibald Wheeler (1911-2008). Source: University of Texas
I never got to meet Wheeler, perhaps because I entered the field of quantum gravity too late. While Wheeler has been influential in the field of quantum information, it really was his gravity work that had the most lasting impact. He invented the terms "black hole" and "wormhole", after all. His most influential contribution to quantum information science is, undoubtedly, the "delayed choice" gedankenexperiment. Let me explain that to you.

Wheeler's thought experiment examines the question of whether a photon, say, takes on wave or particle nature before it interacts with the experiment, sensing (in a way) what kind of experiment is going to be performed on it. In the simplest version of the delayed choice experiment, the nature of the experiment would be changed "after the photon had made up its mind" whether it was going to play the role of particle, or whether it would make an appearance as a wave. Needless to say, this is of course not how quantum mechanics works, and Wheeler was fully aware of it. His interpretation was that a photon is neither wave nor particle, and that it takes on one of the two "coats" only when it is being observed. I'm going to tell you that I agree with the first part (the photon is neither wave nor particle), but I disagree with the second part: it does not in fact take on either particle or wave nature after it is observed. It never ever takes on such a role.

If you think about it, the idea that a system only "comes into being by being observed" is preposterous (however, such a thought was quite in line with some other of Wheeler's philosophies). Measurements are interactions with other systems just as much as any other interactions are: there is nothing special about measurement. This is, in essence, what I'm going to try to convince you of.

Even though the reasoning behind the delayed-choice experiment is preposterous, it has generated an enormous amount of work. Let's first look at how we may set up such an experiment. Below is an illustration of a double-slit experiment from Feynman's famous lecture, where he replaced photons by electrons shot out of an electron gun (such devices are perfectly reasonable and feasible). Note that Caltech, where Feynman spent the majority of his career, has made these lectures freely available. The particular chapter can be accessed here.

 Fig. 1: An interference experiment with electrons. (Source: Feynman Lectures on Physics)
Later on, we're going to be using photons instead of electrons for the quantum system, because experiments are much easier with photon beams as opposed to electron beams.  In that case, we are going to assume that any light is going to be so faint that it can't be thought of as the classical light waves that give rise to Young's interference fringes. Then, at any point in time, there will be at most one photon between the double-slit and the detector, so you have to think about single photons either taking one or the other, or both paths, through the double-slit experiment.

Quantum mechanics predicts that a single electron takes both paths to create the interference pattern in the figure above at (c). Thus, it must somehow interfere with itself, which is difficult to imagine if you think of the electron as a particle (which, of course, it is not). Can we force it to behave as a particle? Suppose you put a particle detector between the wall and the backstop: one behind slit 1, and one behind slit 2. If you get a "hit" on either detector, then you know which path the electron travelled. (You can do this experiment without actually removing the electron, so that you can still get patterns on the screen.) When you obtain this "which-path" information, the interference pattern disappears: you've forced the electron to behave as a particle.

Wheeler's idea was this: Suppose the distance between the wall and the backstop is very, very large. If you do not put the contraption that will measure which path the electron took (the "which-path detector") into the experiment, the electron would have no choice but to go along both paths, ready to interfere with itself and create the interference pattern on the screen. But suppose you bring in the "which-path" apparatus after the electron has passed the slit, but before it is going to hit the screen. Is the electron wave function that is on the "other path" going to "change its mind", or go backwards? What would happen? The thought experiment very nicely illustrates how preposterous the idea is that the experiment itself determines "what the quantum system is", as changing the experiment mid-flight cannot possibly change the nature of the electron.

The experiment I'm going to describe to you (the delayed-choice quantum eraser experiment) has in fact been carried out several times now, and drives Wheeler's idea to the extreme. The choice of experiment (insert the "which-path" detector or not), can be made after the electron has hit the screen! If you are a reader for whom this is immediately obvious, then congratulations (and consider a career in quantum physics, if this is not already your career). It is indeed completely obvious if you understand quantum mechanics, but let me walk you through it anyway.

First, if it was the experiment that determines the nature of the quantum system (particle or wave), how can you change the experiment after it already has occurred? That this is possible is also due to the peculiarities of quantum physics, and it is also the hardest to explain. I'll do it with photons rather than electrons, as this is the experiment that was carried out, and it is also the description I used in the paper that I'm really writing about. You knew this was coming, didn't you?

We can do double-slit experiments with photons just as with electrons: we just have to turn down the intensity of light such that individual photons can be registered on a phosphorescent screen. When you see the screen light up at a particular spot (or, in more modern times, a pixel on a CCD detector lights up), you interpret that as a photon having hit there. Often, the double-slit is replaced by a Mach-Zehnder interferometer, but you shouldn't worry about such technicalities: you can in fact use either.

To pull off this feat of changing the experiment after the fact, you have to create an entangled pair of photons first. You already know what an entangled pair (a "Bell-state") is, because I wrote about it several times: for example in the context of black holes here, and in the context of quantum teleportation and superdense coding here. This pair of photons is also sometimes called an Einstein-Podolsky-Rosen (EPR) pair, because that trio first described a similar entangled state in a very famous paper in 1935.

Let's create such a pair by entangling the "polarization" degree of freedom of the photon. This is the part that is a bit more complicated: to understand it, you have to understand polarization.

Every photon can come in two different polarization states, but what these states are depends on how you decide to measure them. This will be crucial, because this is in fact how you change the measurement after the fact. The thing to know about an entangled pair is that it is in a superposition of those two states. Suppose we use as basis for the photon polarization the "horizontal/vertical" basis. That means that if a photon is polarized horizontally, and you put a filter in front of it that only allows vertical polarization to go through, then out comes nothing. Polarization is, if you will, a photon's way of wiggling. Below is a picture which shows a photon wiggling in the "vertical" and in the "horizontal" way. But a photon can also wiggle in the "circular-left" and "circular-right" way. In fact, it can wiggle in an infinite number of "opposing" pairs, and these bases are related to each other by unitary transformations.

 Fig. 2: One way of depicting photon polarization.
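If you like to see this in symbols rather than pictures, here is a quick NumPy check (my own illustration, nothing more) that the circular basis is indeed just a unitary transformation of the $h,v$ basis:

```python
import numpy as np

# The h/v polarization basis
h = np.array([1, 0], dtype=complex)
v = np.array([0, 1], dtype=complex)

# The circular basis: a complex (unitary) mix of h and v
R = (h + 1j * v) / np.sqrt(2)  # right-circular
L = (h - 1j * v) / np.sqrt(2)  # left-circular

# The change-of-basis matrix that sends (h, v) to (R, L)
U = np.column_stack([R, L])

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
print(abs(np.vdot(R, L)))                      # 0.0: R and L still "oppose"
```

Any intermediate mixing angle gives yet another pair of "opposing ways", and that freedom is exactly what the delayed-choice trick exploits.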
The way a photon is polarized can be changed by an optical element (a "wave plate"), and this ability will be key in the experiment. Suppose we begin with a pair of photons A and B in a Bell-state, written in terms of the horizontal $|h\rangle$ and vertical $|v\rangle$ polarization eigenstates:

$|\Psi\rangle_{AB}=\frac1{\sqrt2}(|h\rangle_A|v\rangle_B+|v\rangle_A|h\rangle_B)$          (1)

You notice that neither of the photons has a defined state, but if I measure one of them (say A) and find that my detector says it is in an $|h\rangle$ state, then I can be sure that measuring B will give you "v", no matter whether you do the measurement now, or a year later with a detector placed a light year away. This is precisely what Einstein could not stomach, calling this mysterious bond "spooky action at a distance", but a careful analysis reveals that there is no "action" at all: signals cannot be sent using this bond.
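You can see both features (perfect anticorrelation, yet nothing to signal with) in a tiny NumPy sketch that just samples joint outcomes of the state Eq. (1) from the Born rule:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Amplitudes of |Psi>_AB = (|hv> + |vh>)/sqrt(2), with h -> 0 and v -> 1,
# indexed by the two-bit joint outcome (A, B)
psi = np.zeros(4)
psi[0b01] = 1 / np.sqrt(2)  # |h>_A |v>_B
psi[0b10] = 1 / np.sqrt(2)  # |v>_A |h>_B

probs = psi**2  # Born rule: probability of each joint outcome

# Measure both photons in the h/v basis, many times
outcomes = rng.choice(4, size=10_000, p=probs)
a, b = outcomes >> 1, outcomes & 1

print(np.all(a != b))  # True: A and B always disagree (perfect anticorrelation)
print(b.mean())        # close to 0.5: B's marginal carries no signal from A
```

The second print is the "no action at a distance" part: no matter what happens at A, detector B by itself just sees a fair coin.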

But here's the thing: I can measure photon B either in the $h,v$ coordinate system, or in another one. This will become crucial, so keep this in mind. But for the moment let's forget that photon B (the entangled partner of photon A) is flying out there, possibly to a measurement device a light-year away. Actually, there is nothing a light year away from us, so let's say we are far in the future and the detector is on Proxima Centauri, about 4 and a quarter light years away. It'll just be a longer experiment.

Photon A now goes through a double-slit, just as the electrons in Figure 1. Now we'll do the "are you a particle or a wave" measurement. We do this by putting so-called "quarter-wave plates" in the path of the photons. When you do this, you entangle the polarization of the photon with the spatial degree of freedom (namely "left slit" or "right slit"). Once you've done this, you only have to measure the polarization of photon A to know whether it went through the left or right slit. In a way, you've tagged the photon's path by the polarization. After doing this, you will lose the interference pattern. You can either have an interference pattern (and we say that the photon wavefunction is "coherent"), or you can have "which-path" information, which makes the wavefunction incoherent. Or so people thought for a long time. It turns out that you can also have a little bit of both, but you can't have both full which-path information and full coherence: there is a tradeoff. And that tradeoff depends on the angle by which you rotate the polarization basis. In the description above, we used "quarter-wave" plates, which give you full information, and zero coherence. Choose something other than 45 degrees (that's the quarter wave), and you can get a little bit of both.

It turns out that there is a simple relationship that quantifies this tradeoff in terms of the angle you choose to do the tagging with. Let's call this angle $\phi$. We can then define the "distinguishability" $D$ and the "visibility" $V$, where $D^2$ measures how well you can distinguish the photon paths (a measure of which-path information), while $V^2$ quantifies the visibility of the interference fringes (a measure of the coherence of the wavefunction). A celebrated inequality (due to Greenberger and Yasin [1]) states that
$D^2+V^2\leq1$     (2)

Now, according to what I just wrote, choosing the angle of the wave plate when performing the which-path entangling operation chooses the experiment for you: Set it at 0 degrees and you do not entangle at all, so that no which-path information is obtained (then $D^2=0$ and $V^2=1$). Set it at $\phi=\pi/4$, and you get perfect which-path information, and no visibility. How can you choose the experiment after the fact, when you have to choose the angle when setting up the experiment? How?
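Here is a toy version of that tradeoff (my own sketch, not the actual quantum optics): model the tagging as writing the path into a two-state "marker", such that the two marker states overlap by $\cos 2\phi$. The overlap is what survives as fringe visibility, and whatever does not overlap is which-path information:

```python
import numpy as np

def tradeoff(phi):
    """Toy tagging model (an assumption of this sketch): the marker
    states for the two paths are |t0> = |0> and
    |t1> = cos(2 phi)|0> + sin(2 phi)|1>."""
    t0 = np.array([1.0, 0.0])
    t1 = np.array([np.cos(2 * phi), np.sin(2 * phi)])
    V = abs(t0 @ t1)        # visibility = overlap of the two markers
    D = np.sqrt(1 - V**2)   # distinguishability of the two markers
    return D, V

# A pure state saturates the Greenberger-Yasin bound: D^2 + V^2 = 1
for phi in np.linspace(0, np.pi / 4, 10):
    D, V = tradeoff(phi)
    assert np.isclose(D**2 + V**2, 1.0)

D0, V0 = tradeoff(0.0)        # no tagging: D ~ 0, V ~ 1
D1, V1 = tradeoff(np.pi / 4)  # quarter-wave tagging: D ~ 1, V ~ 0
print(D0, V0, D1, V1)
```

Any angle in between 0 and $\pi/4$ buys you a little distinguishability at the cost of a little visibility, with the squares always summing to one for a pure state.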

So the following is what makes quantum mechanics so beautiful. You can actually do this because when I described the experiment to you, I did not (it turns out) use an entangled EPR pair as the input, I used a photon in a defined polarization state, such as $|h\rangle$. I did not tell you about this because it would have confused you. I needed you to understand how to extract which-path information first, and how doing it gradually will gradually destroy coherence.

Now take a deep breath, and read very slowly.

If the input to the two-slits (and therefore to the "which-path" detector that entangles polarization and path) is the EPR state Eq. (1), you actually do not get any which-path information using the quarter-wave plate. This is because when the photon "comes in", it is not in a defined polarization state. If it is not in a defined state, you extract nothing. So for that setup, $V^2=1$ even though $\phi=\pi/4$.

Now one more deep breath after you digested this bit. Maybe take two, just to be safe.

Whether the state that comes in to the two slits is indeed Eq. (1) is up to the person at Proxima Centauri, years after the data were recorded on the CCD screen on Earth. This is because what is $|h\rangle$ and what is $|v\rangle$ is determined by how you measure it. A quantum system does not have a state until you say how you measure it. It will be in the $h,v$ basis if that is the basis of your measurement device. It will be in the $R,L$ (right-circular, left-circular) basis, if that is instead what you will choose to examine it with. Or it could be anything in between.

I wrote about this at length in the blog post about the collapse of the wavefunction, within the "On quantum measurement" series. (Rightfully, the present post really should be "On quantum measurement, Part 8", but I decided to make it stand alone.) Please go back to that if the two breaths did not help. There is also an intriguing parallel to how Shannon entropy is not defined until you determine how you will be measuring it, as I wrote about in "What is Information-Part 1". The deeper reason for this is that all of physics is about the relative state of measurement devices. Mark my words.

The reason our person at Proxima Centauri handling photon B actually prepares the state is because photon A is not "projected" at any point of the experiment. This could be done, of course, but that is a different experiment. So now we can see how the delayed-choice experiment works: If the Proxima Centauri person (PCP, for short) measures at an angle $\theta=0$ with respect to the preparation Eq. (1), then the photon is in a defined state (no matter whether the outcome is $h$ or $v$) and only then do you actually extract which-path information. In that case, visibility $V^2=0$. If PCP measures at $\theta=\pi/4$ on the other hand, the entanglement operation (the "tagging") does not work: it is as if the measurement by PCP "erased" the tagging, and $V^2=1$ instead. So indeed, a measurement far in the future (well, here more than four years in the future) will determine what kind of an experiment is done on the photon. The event far in the future will determine whether the photon appeared as a particle, or a wave. Weird, right?
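If you want to watch this happen in the matrix mechanics, here is a minimal three-qubit sketch in NumPy. I am modeling the quarter-wave-plate tagging as a path-controlled polarization flip, which is an idealization of my own, not the actual optics of the experiment. The path coherence (and with it, the fringes) of photon A then depends entirely on which basis we condition on for photon B:

```python
import numpy as np

h = np.array([1, 0], dtype=complex)
v = np.array([0, 1], dtype=complex)
plus = (h + v) / np.sqrt(2)

# Photon A behind the slits: equal superposition of left (0) and right (1),
# with the polarizations of A and B in the Bell state (|hv> + |vh>)/sqrt(2).
# Qubit order: path, pol_A, pol_B.
path = (h + v) / np.sqrt(2)
bell = (np.kron(h, v) + np.kron(v, h)) / np.sqrt(2)
psi = np.kron(path, bell)

# "Tagging": flip A's polarization on the right path only (a CNOT from
# path to pol_A, standing in for the quarter-wave plates)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
tag = (np.kron(np.outer(h, h), np.kron(I2, I2))
       + np.kron(np.outer(v, v), np.kron(X, I2)))
psi = tag @ psi

def path_visibility(state):
    """Fringe visibility = twice the off-diagonal element of the
    path qubit's reduced density matrix."""
    rho = np.outer(state, state.conj()).reshape(2, 4, 2, 4)
    rho_path = np.trace(rho, axis1=1, axis2=3)  # trace out both polarizations
    return 2 * abs(rho_path[0, 1])

def condition_on_B(state, b):
    """Project photon B's polarization onto |b> and renormalize."""
    proj = np.kron(np.eye(4, dtype=complex), np.outer(b, b.conj()))
    out = proj @ state
    return out / np.linalg.norm(out)

print(path_visibility(condition_on_B(psi, h)))     # ~0: which-path basis
print(path_visibility(condition_on_B(psi, plus)))  # ~1: "eraser" basis
```

Note that the unconditional pattern (tracing out B without conditioning) shows no fringes at all: the fringes only appear once PCP's outcomes are used to sort the data, which is exactly the "key" I talk about below.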

What is that you ask? How can an event far in the future affect the data that are stored on a device far in the past?

I didn't say it did, did I? Of course it does not. The truth is much more magical. Without going into all the details here (you can read about them in any paper about the Bell-state quantum eraser, or indeed my own paper referenced below), the result of the measurement by PCP in the future contains crucial information about how to decode the data in the past, information that is akin to the key in a cryptographic procedure.

Yes, cryptographic. That is indeed what I wrote. You will only be able to decipher $D^2$ and $V^2$ when the measurement in the future (which is really a state preparation in the past) is available to you. That is the true magic of quantum mechanics. Without it, you won't be able to see any fringes in the data. But with it, you may be able to reconstruct them to full visibility, if that is how the photon was measured at Proxima Centauri.

How do I know any of this is true? Because we (my student Jennifer Glick and I) analyzed the entire experiment in terms of quantum information theory, and were ultimately able to write down the equations that describe distinguishability and visibility (coherence) entirely in terms of entropies and information, in [2] (Jennifer did all the calculations and wrote the first draft of the manuscript). Clearly, "which-path information" should have an obvious information-theoretic rendering, but it turns out that this is actually a little bit tricky, because it really is a conditional information. But it turns out that "coherence" (or "visibility") can also be measured information-theoretically. And lo and behold, the two are related. In our description, they are related by a common information-theoretic identity: the chain rule for entropies. According to that identity, information $I$ and coherence $C$ (as a function of the PCP angle $\theta$) are related so that
$I(\theta)+C(\theta)=1$        (3)
In a simple qubit model, the information and coherence take on extremely simple forms, namely $I(\theta)=H[\sin^2(\theta+\pi/4)]$ with $C(\theta)=1-H[\sin^2(\theta+\pi/4)]$, where $H[p]$ is the standard Shannon entropy function $H[p]=-p\log(p)-(1-p)\log(1-p)$. And take a look at how our information-theoretic quantities compare to the quantum optical measures of distinguishability and visibility in Fig. 3 below. It almost looks as if distinguishability and visibility (coherence) should have been defined information-theoretically from the outset, doesn't it?
 Fig.3: Top: Which-path information (solid line) and coherence (dashed line) in terms of quantum information theory. Bottom: Discrimination (solid) and visibility (dashed)  in quantum optics. $Q$ refers to the quantum state at the beam-splitter, and $D_A$  and $D_B$ refer to polarization detectors. From [2].
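You can verify the endpoints of Eq. (3) with a few lines of Python, using the qubit-model formulas just quoted (entropies in bits, i.e., base-2 logarithms):

```python
import numpy as np

def H(p):
    """Binary Shannon entropy in bits, with the convention 0 log 0 = 0."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def info(theta):       # which-path information I(theta) in the qubit model
    return H(np.sin(theta + np.pi / 4) ** 2)

def coherence(theta):  # C(theta) = 1 - I(theta)
    return 1 - info(theta)

# theta = 0: PCP measures in h/v, which-path information is maximal
print(info(0.0))             # ~1 bit, since sin^2(pi/4) = 1/2 and H[1/2] = 1
# theta = pi/4: the tagging is "erased", coherence is maximal
print(coherence(np.pi / 4))  # ~1, since sin^2(pi/2) = 1 and H[1] = 0
# The chain rule I + C = 1 holds at every angle in between
print(all(np.isclose(info(t) + coherence(t), 1)
          for t in np.linspace(0, np.pi / 4, 50)))
```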
So what does all this teach us about quantum mechanics in the end (besides, of course, that quantum mechanics is awesome)? We have learned at least two things. Quantum systems are not either particle or wave. They are in fact neither, because both concepts are classical in nature. This, to some extent, I stipulate we knew already. Wheeler knew it. (Bohr, I contend, not so much.) But what I've shown you is that quantum systems don't "change their colors" after measurement either, as Wheeler had advocated. They remain "neither", even when we think we pinned them down, because what I've shown you is that you can have them take on this coat or that, or any in between, years after the ink has dried (I mean, after the data were recorded). They (the photons, electrons, etc.) are not one or the other. They appear to you the way you choose to see them, when you interrogate a quantum state with classical devices.

Those devices cannot reveal to you the reality of the quantum state, because the devices are classical. Don't hate them because of their limitations. Instead, use them wisely, because what I just showed you is that, if used in a clever manner, they enable you to learn something about the true nature of quantum physics after all. As, for example, the experiment in [3] does.

References

[1] D.M. Greenberger and A. Yasin, "Simultaneous wave and particle knowledge in a neutron interferometer". Physics Letters A 128 (1988) 391-394.
[2] J.R. Glick and C. Adami, "Quantum information theory of the Bell-state quantum eraser". Phys. Rev. A 95 (2017) 012105. Full text also on arXiv
Note: Jennifer Glick is first author on this paper because she performed all calculations in it and wrote the first draft.
[3] Y.H. Kim, R. Yu, S.P. Kulik, Y.H. Shih, and M.O. Scully, "Delayed 'choice' quantum eraser". Phys. Rev. Lett. 84 (2000) 1-5.

## Tuesday, December 6, 2016

### Can Life emerge spontaneously?

It would be nice if we knew where we came from. Sure, Darwin's insight that we are the product of an ongoing process that creates new and meaningful solutions to surviving in complex and unpredictable environments is great and all. But it requires three sine qua non ingredients: inheritance, variation, and differential selection. Three does not seem like much, and the last two are really stipulated semper ibi (always there): There is going to be variation in a noisy world, and differences will make a difference in worlds where differences matter. Like all the worlds you and I know. So it is kind of the first ingredient that is a big deal: Inheritance.

Inheritance is indeed a bit more tricky. Actually, a lot more tricky. Inheritance means that an offspring carries the characters of the parent. Not an Earth-shattering concept per se, but in the land of statistical physics, inheritance is not exactly a given. Mark the "offspring" part of that statement. Is making offspring such a common thing?

Depends on how you define "offspring". The term has many meanings. Icebergs "calve" other icebergs, but the "daughter" icebergs are not really the same as the parent in any meaningful way. Crystals grow, and the "daughter" crystals do indeed have the same structure as the "parent" crystals. But this process (while not without relevance to the origins of life) actually occurs while liberating energy (it is a first-order phase transition).

The replication of cells (or people, for that matter) is very different from the point of view of statistical physics, thermodynamics, and indeed probability theory. Here we are going to look at this process entirely from the point of view of the replication of the information inherent in the cell (or the person). The replication of this information (assuming it is stored in polymers of a particular alphabet) is not energetically favorable. Instead, it requires energy, which explains why cells only grow if there is some kind of food around.

Look, the energetics of molecular replication are complicated, messy, and depend crucially on what molecules are available in what environment, at what temperature, pressure, salt concentrations, etc. etc. My goal for this blog post is to evade all that. Instead, I'm just going to ask how likely it is in general for a molecule that encodes a specific amount of information to arise by chance. Unless the information stored in the sequence is specifically about how to speed up the formation of another such molecule, however unlikely the formation of the first molecule was, the formation of two of them would be twice as unlikely (actually, exponentially so, but we'll get to that).

So this is the trick then: We are not interested in the formation of any old information by chance: we need the spontaneous formation of information about how to make another one of those sequences. Because, if you think a little bit about it, you realize that it is the power of copying that renders the ridiculously rare ... conspicuously commonplace. Need some proof for that? Perhaps the most valuable postage stamp on Earth is the famed "Blue Mauritius", a stamp that has inspired legendary tales and shortened the breath of many a collector, as there are (most likely) only two handfuls of those stamps left in the universe today.

 Blue (left) and Red (right) Mauritius of 1847.  (Wikimedia).
But the original plate from which this stamp was printed still exists. Should someone endeavor to print a million of those, I doubt that they each would be worth the millions currently shelled out for one of those "most coveted scraps of paper in existence". (Of course experts would be able to tell the copies from the originals, because of the sophistication of the forensic methods deployed on such works and their forgeries.) But my point still stands: copying makes the rare valuable ... cheaply ordinary.

When the printing press (the molecular kind) has not yet been invented, what does it cost to obtain a piece of information? This blog post will provide the answer, and most importantly, provide pointers to how you could cheat your way to a copy of a piece of information that would be rare not just in this universe, but a billion billion trillion more. Well, in principle.

How do you quantify rarity? Generally speaking, it is the number of things that you want, divided by the number of things there are. For the origin of life, let's imagine for a moment that replicators are sequences of linear heteropolymers. This just means that they are sequences of "letters" on a string, really. They don't have to self-replicate by themselves, but they have to encode the information necessary to ensure that they get replicated somehow. For the moment, let us restrict ourselves to sequences of a fixed length $L$. Trust me here, this is for your own good. I can write down a more general theory for arbitrary length sequences that does nothing to help you understand. On the contrary. It's not a big deal, so just go with it.

How many sequences are there of length $L$? Exactly $D^L$, of course (where $D$ is the size of the alphabet). How many self-replicators are there among those sequences? That, we all understand, is the big question. It could be zero, of course. Let's imagine it is not, and that the number is $N_e$, where $N_e$ is not zero. If there is a process that randomly assembles polymers of length $L$, the likelihood $P$ that you get a replicator in that case is
$P=\frac{N_e}{D^L}$       (1)
So far so good. What we are going to do now is relate that probability to the amount of information contained in the self-replicating sequence.

That we should be able to do this is fairly obvious, right? If there is no information in a sequence, well then that sequence must be random. This means any sequence is just as good as any other, and $N_e=N$, where $N=D^L$ is the total number of sequences (all sequences are functional at the same level, namely not functional at all). And in that case, $P=1$ obviously. But now suppose that every single bit in the sequence is functional. That means you can't change anything in that sequence without destroying that function, and implies that there is only one such sequence. (If there were two, you could make at a minimum one change and still retain function.) In that case, $N_e=1$ and $P=1/N$.

What is a good formula for information content that gives you $P=1$ for zero information, and $1/N$ for full information? If $I$ is the amount of information (measured in units of monomers of the polymer), the answer is
$P=D^{-I}.$      (2)
Let's quickly check that. No information is $I=0$, and $D^0=1$ indeed.  Maximal information is $I=L$ (every monomer in the length $L$ sequence is information). And $D^{-L}=1/N$ indeed. (Scroll up to the sentence "How many sequences are there of length $L$", if this is not immediately obvious to you.)
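That quick check can also be done in a few lines of Python; the alphabet size and length below are arbitrary toy choices of mine:

```python
D, L = 4, 10          # toy alphabet (nucleotides) and sequence length
N = D**L              # total number of sequences of length L

def P(I):
    """Formula (2): chance of randomly assembling a sequence carrying
    I monomers' worth of information."""
    return D**(-I)

print(P(0))           # 1: zero information, any sequence will do
print(P(L) * N)       # 1.0: maximal information, exactly one sequence in N
```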

The formula (2) can actually be derived, but let's not do this here. Let's just say we guessed it correctly. But this formula, at first sight, is a monstrosity. If it was true, it should shake you to the bones.

Not shaken yet? Let me help you out. Let us imagine for a moment that $D=4$ (yeah, nucleotides!). Things will not get any better, by the way, if you use any other base. How much information is necessary (in that base) to self-replicate? Actually, this question does not have an unambiguous answer. But there are some very good guesses at the lower bound. In the lab of Gerry Joyce at the Scripps Research Institute in San Diego, for example, hand-designed self-replicating RNAs can evolve [1]. How much information is contained in them?
 Prof. Gerald Joyce, Scripps Research Institute
We can only give an upper bound, because while it takes 84 bits to specify this particular RNA sequence, only 24 of those bits are actually evolvable. The 60 un-evolvable bits (they are un-evolvable because that is how the team set up the system) could, in principle, represent far less information than 60 bits. This may not be clear to you just yet, but explaining it now would be distracting; I'll come back to it further below.

Let's take this number (84 bits) at face value for the moment. How likely is it that such a piece of information emerged by chance? According to our formula (2), it is about
$P\approx7.7\times 10^{-25}$
That's a soberingly small likelihood. If you wanted to have a decent chance to find this sequence in a pool of RNA molecules of that length, you'd have to have about 27 kilograms of RNA. That's almost 60 pounds, for those of you that... Never mind.

The point is, wherever linear heteropolymers are assembled by chance, you're not gonna get 27 kilograms of that stuff. You might get significantly smaller amounts (billions of times smaller), but then you would have to wait a billion times longer. On Earth, there wasn't that much time (as Life apparently arose within half a billion years of the Earth's formation). Now, as I alluded to above, the Lincoln-Joyce self-replicator may actually code for fewer than the 84 bits it took to make it: at the origin of this replicator was intelligent design, and a randomly generated one may require fewer bits. We are left with the problem: can self-replicators emerge by chance at all?

This blog post is, really, about these two words: "by chance". What does this even mean?

When writing down formula (2), "by chance" has a very specific meaning. It means that every polymer to be "tried out" has an equal chance of occurring. "Occurring", in chemistry, also has a specific meaning. It means "to be assembled from existing monomers", and if each polymer has an equal chance to be found, then that means that the likelihood to produce any monomer is also equal.

For us, this is self-evident. If I want to calculate the likelihood that a random coin toss creates 10 heads in a row by chance, I take the likelihood of "heads" and raise it to the power of ten. But what if your coin is biased? What if it is a coin that lands on heads 60% of the time? Well, in that case the likelihood to get ten heads in a row is not 1 in 1,024 anymore but rather $(0.6)^{10}$, a factor of about 6.2 larger. This is quite a gain given such a small change in the likelihood of a single toss (from 0.5 to 0.6). But imagine that you are looking for 100 heads in a row. The same change in bias now buys you a factor of almost 83 million! And for a sequence of 1,000 heads in a row, you are looking at an enhancement factor of... about $10^{79}$.
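To make the arithmetic concrete, here is a short Python sketch (my own, not from the post; the numbers are the ones quoted above) that computes the enhancement factor a biased coin buys you:

```python
# A coin that lands heads with probability p (instead of a fair 0.5)
# boosts the odds of n heads in a row by a factor of (p / 0.5)**n.

def enhancement(p, n, p_fair=0.5):
    """Factor by which bias p (vs. a fair coin p_fair) boosts the
    probability of n heads in a row."""
    return (p / p_fair) ** n

print(enhancement(0.6, 10))    # about 6.2
print(enhancement(0.6, 100))   # about 8.3e7 ("almost 83 million")
print(enhancement(0.6, 1000))  # about 1.5e79
```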

That is the power of bias on events with small probabilities. Mind you, getting 100 heads in a row still has a tiny probability, but gaining almost eight orders of magnitude is not peanuts. It might be the difference between impossible and... maybe-after-all-possible. Now, how can this be of use in the origin of life?

As I explained, formula (2) relies on assuming that all monomers are created equally likely, with probability $1/D$. When we think about the origin of life in terms of biochemistry, we begin by imagining a process that creates monomers, which are assembled into those linear heteropolymers, and then copied somehow. (In biochemical life on Earth, assembly is done in a template-directed manner, which means that assembly and copying are one and the same thing.) But whether assembly is template-directed or not, how likely is it that all monomers occur spontaneously at the same rate? Any biochemist will tell you: extremely unlikely. Instead, some of the monomers are produced spontaneously at one rate, and others at a different rate. And these rates depend on local circumstances: temperature, pH level, abundance of minerals, abundance of just about any element, as it turns out. So, depending on where you are on a pre-biotic Earth, you might be faced with wildly different monomer production rates.

This unevenness of production can be viewed as a D-sided "coin" where each of the D sides has a different probability of occurring. We can quantify this unevenness by the entropy that a sequence of such "coin" tosses produces. (I put "coin" in quotes because a D-sided coin isn't a coin unless D=2. I'm just trying to avoid saying "random variable" here.) This entropy (as you can glean from the Information Theory tutorial that I've helpfully created for you, starting here) is equal to the length of the sequence if each monomer indeed occurs at rate 1/D (and we take logs to base D), but is smaller than the length if the probability distribution is biased. Let's call $H(a)$ the average entropy per monomer, as determined by the local biochemical constraints. And let's remember that if all monomers are created at the same exact rate, $H(a)=1$ (its maximal value), and Eq. (2) holds. If the distribution is uneven, then $H(a)<1$. The entropy of a spontaneously created sequence is then $L\times H(a)$, which is smaller than $L$. In a sense, it is not random anymore, if by random we understand "each sequence equally likely". How could this help increase the likelihood of spontaneous emergence of life?
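As a quick illustration (a sketch of mine, not from the post), the entropy per monomer of such a biased D-sided "coin", in units of mers (logs taken to base D), can be computed like so:

```python
import math

def entropy_per_monomer(probs):
    """Shannon entropy of a monomer distribution, in mers
    (logs taken to base D, where D is the alphabet size)."""
    D = len(probs)
    return -sum(p * math.log(p, D) for p in probs if p > 0)

# A uniform 4-letter alphabet (think RNA) has the maximal entropy of 1 mer:
print(entropy_per_monomer([0.25] * 4))            # 1.0
# A biased distribution has lower entropy:
print(entropy_per_monomer([0.5, 0.3, 0.1, 0.1]))  # about 0.84
```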

Well, let's take a closer look at the exponent in Eq. (2), the information $I$. Under certain conditions that I won't get into here, this information is given by the difference between sequence length $L$ and entropy $H$:
$I=L-H.$   (3)
That such a formula must hold is not very surprising. Let's look at the extreme cases. If a sequence is completely random, then $H(a)=1$, and therefore $H=L$, and therefore $I=0$. Thus, a random sequence has no information. On the opposite end, suppose there is only one sequence that can do the job, and any change to the sequence leads to the death of that sequence. Then the entropy of the sequence (which is the logarithm of the number of ways you can do the job) must be zero, and in that case the sequence is all information: $I=L$. While the full version of formula (3) has plenty more terms that become important if there are correlations between sites, we are going to ignore them here.

So remember that the probability for spontaneous emergence of life is so small because $I$ is large, and it sits in the exponent. But now we realize that the $L$ in (3) is really the entropy of a spontaneously created sequence, and if $H(a)<1$, then the first term is $L\times H(a)<L$. This can help a lot, because it makes $I$ smaller, and the change occurs in the exponent. Let's look at some examples.

We could first look at English text. The linear heteropolymers of English are strings of the letters a-z (let's just stick with lower case letters and no punctuation for simplicity). What is the likelihood to find the word ${\tt origins}$ by chance? If we use an unbiased typewriter (our 26-sided coin), the likelihood is $26^{-7}$ (about 1 in 8 billion), as ${\tt origins}$ is a 7-mer, and each mer is information (there is only one way to spell the word ${\tt origins}$). Can we do better if our typewriter is biased towards English? Let's find out. If you analyze English text, you quickly notice that letters occur at different frequencies: e more often than t, which occurs more often than a, and so forth. The plot below shows the distribution of letters that you would find.

 Letter distribution of English text
The entropy-per-letter of this distribution is 0.89 mers. Not very different from 1, but let's see how it changes the 1 in 8 billion odds. The biased-search chance is, according to this theory, $P_\star=26^{-7\times 0.89}$, which comes out to about 1.5 per billion: an enhancement of more than a factor of 12. Obviously, the enhancement is going to be more pronounced the longer the sequence is. We can test this theory in a more appropriate system: self-replicating computer programs.
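The arithmetic of the ${\tt origins}$ example fits in a few lines of Python (a sketch of mine, using the numbers quoted above):

```python
# Biased-typewriter arithmetic for the word "origins":
D, L, H_a = 26, 7, 0.89       # alphabet size, word length, entropy per letter

p_unbiased = D ** (-L)        # about 1.2e-10, i.e. roughly 1 in 8 billion
p_biased = D ** (-L * H_a)    # the biased-search likelihood P_star
print(p_biased)               # about 1.5e-9
print(p_biased / p_unbiased)  # enhancement: a bit more than a factor of 12
```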

That you can breed computer programs inside a computer is nothing new to those who have been following the field of Artificial Life. The form of Artificial Life that involves self-replicating programs is called "digital life" (I have written about the history of digital life on this blog); its best-known platform is the program Avida. For those who can't be bothered to look up what kind of life Avida makes, let's just focus on the fact that avidians are computer programs written in a language that has 26 instructions (conveniently abbreviated by the letters a-z), executed on a virtual CPU (you don't want digital critters to wreak havoc on your real CPU, do you?). The letters of these linear heteropolymers have specific meanings on that virtual CPU. For example, the letter 'x' stands for ${\tt divide}$, which when executed will split the code into two pieces.

Here's a sketch of what this virtual CPU looks like (with a piece of code on it, being executed)
 Avidian CPU and code (from [2]).
When we use Avida to study evolution experimentally, we seed a population with a hand-written ancestral program. The reason we do this is because self-replicators are rare within the avidian "chemistry": you can't just make a random program and hope that it self-replicates! And that is, as I'm sure has dawned on the reader a while ago, where Avida's importance for studying the origin of life comes from. How rare is such a program?

The standard hand-written replicator is a 15-mer, but we are sure that not all 15 mers are information. If they were, then its likelihood would be $26^{-15}\approx 6\times 10^{-22}$, and it would be utterly hopeless to find it via a random (unbiased) search: it would take about 50,000 years if we tested a million strings a second on one thousand computers in parallel. We can estimate the information content by estimating the ratio $\frac{N_e}{26^{15}}$, where $N_e$ is the number of 15-mers that self-replicate: instead of trying out all possible sequences, we try out a billion, and take the fraction of self-replicators among them to be representative of the overall fraction. (If we don't find any, we try ten billion, and so forth.)

When we created 1 billion 15-mers using an unbiased distribution, we found 58 self-replicators. That was unexpectedly high, but it pins down the information content to be about
$I(15)=-\log_D(58\times 10^{-9})\approx 5.11 \pm 0.04$ mers.
The 15 in $I(15)$ reminds us that we were searching within 15 mer space only. But wait: about 5 mers encoded in a 15 mer? Could you write a self-replicator that is as short as 5 mers?

Sadly, no. We tried all 11,881,376 5-mers, and they are all as dead as doornails. (We test those sequences for life by dropping them into an empty world, and then checking whether they can form a colony.)

Perhaps 6-mers, then? Nope. We checked all 308,915,776 of them. No sign of life. We even checked all 7-mers (over 8 billion of them). No colonies. No life.

We did find life among 8-mers, though. We first sampled one billion of them, and found 6 unique sequences that would spontaneously form colonies [2]. That number immediately allows us to estimate the information content as
$I(8)=-\log_D(6\times 10^{-9})\approx 5.81 \pm 0.13$ mers,
which is curious.

It is curious because according to formula (2) waaay above, the likelihood of finding a self-replicator should only depend on the amount of information in it. How can that information depend on the length of sequence that this information is embedded in? Well it can, and you'll have to read the original reference [2] to find out how.

By the way, we later tested all sequences of length 8 [3], giving us the exact information content of 8-mer replicators as 5.91 mers. We even know the exact information content of 9-mer replicators, but I won't reveal that here. It took over 3 months of compute time to get this, and I'm saving it for a different post.
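The sampling estimates above all boil down to one formula: $I$ equals minus the log (base D) of the fraction of replicators found. A quick sketch (mine, using the counts reported in the text):

```python
import math

def info_content(n_found, n_tried, D=26):
    """Estimate information content (in mers) from a random search:
    I = -log_D(fraction of sampled sequences that self-replicate)."""
    return -math.log(n_found / n_tried, D)

# 58 replicators among a billion random 15-mers, 6 among a billion 8-mers:
print(info_content(58, 1e9))  # I(15): about 5.11 mers
print(info_content(6, 1e9))   # I(8):  about 5.81 mers
```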

But what about using a biased typewriter? Will this help in finding self-replicators? Let's find out!
We can start by using the measly 58 replicators found by scanning a billion 15-mers, and making a probability distribution out of it. It looks like this:
 Probability distribution of avidian instructions among 58 replicators of L=15. The vertical line is the unbiased expectation.
It's clear that some instructions are used a lot (b,f,g,v,w,x). If you look up what their function is, they are not exactly surprising. You may remember that 'x' means ${\tt divide}$. Obviously, without that instruction you're not going to form colonies.

The distribution has an entropy of 0.91 mers. Not fantastically smaller than 1, but we saw earlier that small changes in the exponent can have large consequences. When we searched the space of 15 mers with this distribution instead of the uniform one, we found 14,495 replicators among a billion tried, an enhancement by a factor of about 250. Certainly not bad, and a solid piece of evidence that the "theory of the biased typewriter" actually works.  In fact, the theory underestimates the enhancement, as it predicts (based on the entropy 0.91 mers) an enhancement of about 80 [2].
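The predicted factor of about 80 comes from comparing the unbiased likelihood $D^{-L}$ to the biased one $D^{-L H(a)}$, i.e., an enhancement of $D^{L(1-H(a))}$. In Python (a sketch with the numbers from the text):

```python
# Predicted vs. observed enhancement from the biased search of 15-mer space:
D, L, H_a = 26, 15, 0.91          # alphabet, length, entropy per instruction

predicted = D ** (L * (1 - H_a))  # theoretical enhancement factor
observed = 14495 / 58             # replicators found, biased vs. unbiased
print(predicted)                  # about 81
print(observed)                   # about 250
```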

We even tested whether taking the distribution generated by the 14,495 replicators, which is certainly a better estimate of a "good distribution", will net even more replicators. And it does indeed. Continuing like this allows your search to zero in on the "interesting" parts of genetic space in ever more laser-like fashion, but the returns are, understandably, diminishing.

What we learn from all this is the following: do not be fooled by naive estimates of the likelihood of spontaneous emergence of life, even if they are based on information theory (and are thus vastly superior to estimates that claim $P=D^{-L}$). Real biological systems search with a biased distribution. The bias will probably go "in the wrong direction" in most environments. (Imagine an avidian environment where 'x' is never made.) But among the zillions of environments that may exist on a prebiotic Earth, a handful might have a distribution that is close to the one we need. And in that case, life suddenly becomes possible.

How possible? We still don't know. But at the very least, the likelihood does not have to be astronomically small, as long as nature will use that one little trick: whip out that biased typewriter, to help you mumble more coherently.

[1] T. A. Lincoln and G. F. Joyce, Self-sustained replication of an RNA enzyme, Science 323, 1229–1232, 2009.
[2] C. Adami and T. LaBar, From entropy to information: Biased typewriters and the origin of life. In: “From Matter to Life: Information and Causality” (S.I. Walker, P.C.W. Davies, and G. Ellis, eds.) Cambridge University Press (2017), pp. 95-113. Also on arXiv
[3] Nitash C.G., T. LaBar, A. Hintze, and C. Adami, Origin of life in a digital microcosm. To appear.