Eqs

Saturday, July 21, 2018

How to pack your library: A guide

That's right, folks: this is not a science post. After 45 posts about information, quanta, intelligence, and whatnot, a how-to guide? Well, I only have one blog, and I didn't know where else to put this stuff, which I think could be helpful to others, because I've done this several times and learned a bunch. So now you get to read about how to move your library of precious books from one house to another.

First things first. If your library consists of a pile of paperbacks you have largely forgotten about, then this post is not for you. In fact, I'm wondering how you even got to this page. I'm talking to people whose library looks like this:
The Upstairs Library
What you see in the pic is actually only half my library. The other half is behind you from this view, along with two shelves in the basement. It's a bit more than a thousand books. 

Thing is, it is a collection of mostly First Edition and rare books, so you would not want to just throw them into a box when you move. And I am moving. How do you make sure the books survive the trip?

I've had to move this library before (from California to Michigan), and I've learned some things from that move. One of the things you need to know about movers is that they don't really care very much about what is inside a box they are moving. They are under time pressure, and if something breaks then insurance will pay for it. So it is up to you to keep the stuff safe.

When I moved last, I wrapped every book individually in a foam material. That protected the books well (all books survived just fine), but it was expensive and time-consuming. If you have not moved like this before, I can assure you that packing material is expensive. By packing each book individually in the material in which you would ordinarily wrap china, you not only buy a lot of that material (at about $15 a roll), but you also pack fewer books per box. And packing each book into foam takes time. For this move, I was looking to do better. Here's what I came up with.

First, let us think about what you have to protect against. There are two main threats to rare books. One is physical injury to the box, which will impact the books inside from the outside. The second is rubbing of books against each other, damaging their covers or dust jackets. To protect against rubbing, you do not need to wrap the books in foam; you just need to wrap them. To protect against injury to the box, you need to protect the box, not every book. My solution was to use paper to wrap each book individually, and line the box with foam. It doesn't protect you against catastrophic box intrusion, but it should work as well as the previous method, with a substantial cost saving.

First, buy some boxes. What size do you need? The answer is: small, as small as you can find. You may think: "Shouldn't I get a big box where I can fit as many books as I can?" Well, if you did that, the box would become so heavy that your movers would either refuse to touch it, or else injure it while struggling with it, possibly for spite. So go small. In my previous move I used "book size" boxes (not all companies have them; they are one-foot cubes), but for this move, I went with the "Extra Small" size. The dimensions are 15'' long, 12'' wide, and 10'' high. In my case they were made by Home Depot, but I suspect other retailers carry that size.

I like this box because it is not high. Books are not high. In fact, almost no book is higher than 10 inches (but some are close). So you will be able to fit your books standing up in this box, and if there is some room at the top, you can put a layer of books there. If not, you fill with paper, as I'll show.

Here's a pic of some of the boxes I bought. (I ended up getting about 65 of those).

How many do you need? This is not easy to answer. I think you can get at least 15 books into such a box, up to 25 books if they are small. So, on average, think 20 books per box. I'll show you a few pics later.
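If you want to turn that rule of thumb into a shopping list, here is a tiny helper (a sketch in Python, using this post's numbers; your books-per-box average will vary, and oversized books need bigger boxes):

    import math

    def boxes_needed(n_books, books_per_box=20):
        """Estimate how many boxes to buy, using the average from this post."""
        return math.ceil(n_books / books_per_box)

    print(boxes_needed(1000))  # 50 -- I ended up needing 75 boxes, so buy extra

In other words, treat the output as a lower bound.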

OK, so let's first make the boxes. I assume you know how to tape up a standard box. I double-tape the bottom (and later the top), and also tape the sides of the box. I used the tape below, after trying many different types that were all awful. It's not too flimsy, and the cutter actually works.



Now let's line them. For that, I use dish foam: a roll one foot wide, perforated into 1-ft squares. There are 50 squares in a roll.


One square will perfectly cover each 12'' side of the box. For the 15'' side, I cut 3'' to the right of the perforation, leave that piece attached to the next, and cut 3'' to the left of the following perforation. It looks like this:

Pieces 3 and 5 are 15'' and line the sides. Piece 4 is 18" and will cover the bottom, with a little overlap to the sides. Each roll will give you enough for 8 boxes, with a little bit to spare (but sometimes the beginning of the roll is unusable). 

I use the tape to glue the foam to the sides:

Do the same on all sides:

Don't worry that the bottom does not stick to the side. Once the bottom is in it'll be fine. Besides, the books will press against it.

Remember the box is only 10'' high, so two inches of the 12'' foam reach up toward the flaps.

With the foam in, we're ready to wrap books. I ended up using paper, which is cheap and doesn't interact with the covers of the books. These rolls here have sheets that are 2 feet x 2 feet, and there are seventy in each roll. One sheet is enough for one book. You begin like this:
Fold over the bottom, then the top, then fold the left over on top of the book, then flip it over. This way, the fragile edges of the book get double coverage. Now it looks like this:
Now I just take a small piece of tape to finish it off:

If you have a cat, she will probably want to assist you, by the way. I suggest wrapping a bunch of books before packing a box, because that way you can select the right sizes to optimally fill the box.

If you have fairly slim books, it does not hurt to pack two of them together, as their dust jackets will not be able to rub against each other:
Packed together, they are just like one thicker book, and you saved yourself one sheet:
You notice that I'm not very precise when wrapping, because precision is altogether unnecessary.

The roll usually has a little wave at the bottom, which is perfect to place the book on before you begin to wrap:

(Actually you can do this any which way you want. I'm just showing off this book. Yes, that's a first edition in first state dust jacket. And no you can't have it.) 

Time to put books into the box. There are two ways to do this, depending on the size of the book. Big ones go in like this:
Often there is room to put a few books on top:
Once you've done that, crumple a few sheets to make sure the box is completely "full" after closing it:

Finally, close and number:

If you are really worried about the contents, you can of course make a list for each box. I did not do this, but I did use an app called "Sortly" that allows me to take a picture of each box, name it, tag it, and fill in a field where I can write something about the contents. For the box above, I just wrote "Nonfiction", because these books were from the nonfiction shelf. If the moving company loses a box, then I have proof that it existed, what it looked like, and I have some idea what's inside.

I also have some oversized books (the "coffee table" kind). For those, I used bigger boxes (the "small box" type), and because these boxes are very heavy, I used the "extra strength" type.

After all is said and done, your library should look like this:


These are the 66 "extra small" boxes. There are also nine "small" boxes not in the view, for a total of 75. 

Yes, this is time-consuming. It helps a lot to develop a routine so that every step is essentially the same. That way, you can achieve something akin to an assembly line. If you have your strips of adhesive tape all ready and lined up, you can do a book in 20 seconds flat. 

In my case, all these books will go to storage first, because I won't have a library in the sabbatical apartment we will be renting. If you have to store your books too, let me give you one final piece of advice: get climatized storage. We'll put the piano into storage also, so the books and the piano will be reunited there (and of course pianos have to be stored climatized also). I'm assuming, of course, that if you have a library like mine, well then you also have a grand piano. It just comes with the territory. 

Wednesday, March 21, 2018

The science of "Interstellar" revisited: How to travel through a wormhole

The movie "Interstellar" has not only fascinated moviegoers: it also has created a discussion among scientists whether any and all of the science discussed in the movie is accurate. To be sure, this is not the fate that befalls your average Sci-Fi movie where the laws of physics are routinely (and often egregiously) broken over an over again. I'm sure you can find a list simply by googling appropriately. Try "physics violations in sci-fi movies", for example. 

But this movie is different. This movie has famed theoretical physicist (and newly-minted Nobel laureate) Kip Thorne as an executive producer.

Kip Thorne (source: Wikimedia)
Not only did Kip advise director Christopher Nolan about the science (including talking him out of using time travel as a plot device), but he also spent time to calculate what a black hole event horizon should look like. He is quoted as saying, for example: 

"For the depictions of the wormholes and the black hole, we discussed how to go about it, and then I worked out the equations that would enable tracing of light rays as they traveled through a wormhole or around a black hole—so what you see is based on Einstein's general relativity equations." 

This is certainly unusual for a Sci-Fi movie. Indeed, renderings based on the equations Kip provided have been published in two papers: one aimed at the general relativity crowd, and one aimed at the SIGGRAPH audience.

To boot, Kip Thorne is not only an advisor: he co-wrote the initial script of the movie (which originally had Steven Spielberg attached as director). But according to Kip, the final story in "Interstellar" bears only a fleeting resemblance to his initial script.

So if "Interstellar" is so infused with Kip's science, why would there be a need to go "revisit the science of 'Interstellar', as this blog post titillatingly promises.

Black holes, that is why! 

"Interstellar" has black holes front and center, of course. And having Kip Thorne as an advisor is probably as good as you can get, as he is co-author of the magnum opus "Gravitation", and also wrote "Black Holes and Time Warps" for the less mathematically inclined. I have both volumes, I should disclose. I believe my copy of "Black Holes" is signed by him.

Having said all this, and granted my admiration for his science and his guts (he defied federal funding agencies by writing proposals on closed time-like loops), I have a bone to pick with the science depicted in the movie.

Lord knows, I'm not the first one (but perhaps a tad late to the party). So just give this long-moribund blog post a chance, will you?

This is no time to worry about spoilers, as the movie came out a while ago. The story is complex, but a crucial element of the story requires traveling into, and then somewhat miraculously out of, a black hole.

When you get past the event horizon, as our hero Joseph Cooper (played by Matthew McConaughey) does, could there be any way back (as depicted in the movie)? At all? Without breaking the laws of physics and therefore credulity at the same time?

This blog post will tell you: "Yes actually, there is", but it is not an answer that Kip Thorne, or anyone else involved with the Interstellar franchise, might cozy up to. It is possible, but it involves murder. Read on if that's not immediately obvious to you.

I am going to ask and answer the question: "If something falls into a black hole, and that black hole is connected to another black hole (for example, by a wormhole), can you come out on 'the other side'?"

But I have to issue a quantum caveat: I'm going to assume that the two black holes are connected via quantum entanglement as well. Black holes that are connected this way have been considered in the literature before (Google "ER=EPR" if you want to learn more about this).
Two black holes connected by a wormhole (Source: Wikimedia).

The main point for us here is that the two "mouths" of the "Einstein-Rosen bridge" (that's what the two black holes connecting two different regions of spacetime are called) are quantum coherent. And of course you figured out that "ER" stands for "Einstein-Rosen", and "EPR" abbreviates "Einstein-Podolsky-Rosen", the three authors who investigated quantum entanglement and its relation to quantum reality in the famous 1935 paper. Now, previously people had argued that the wormhole connecting the two black holes would not be stable (it would collapse), and that anyway it could not be traversed. But later on it was shown (and Kip Thorne was involved in this work) that wormholes could be stabilized (maybe using some exotic type of matter), and possibly could also be traversable. So I'm not going to debate this point: I'll stipulate that the wormhole is indeed stable, and traversable. What I'm concerned with is the escape. Because remember: "What goes on inside of a black hole stays inside the black hole."

"So what's this about a murder?"

Hold your horses. Let me get some preliminaries off my chest first. You've been to my blog pages before, right? You've read about what happens to stuff that falls into black holes, no? If any of your answers is "Umm, no?", then let me gently point your browser to this post where I explain to you the fate of quantum information in black holes. In the following, I will shamelessly assume you have mastered the physics in that post (but I will gently issue reminders to those whose memory is as foggy as mine own).

So what you the reader of my pages know (that, alas, most of the people working in the field have yet to discover) is that when you throw something in a black hole, several things happen. The thing (and we usually think of a particle, of course) is either absorbed or reflected (depending on the particle's angular momentum). Let's say it is absorbed. But Einstein taught us in 1917 that something else happens: the vacuum makes a copy of what you threw in, along with an anti-copy.

I pause here for those readers that just struggled with an onset of what feels like an aneurysm.

To those readers: please read the post on the "Cloning Wars", which explains that, yes, this happens, and no, this does not violate the no-cloning theorem.

A copy and an anti-copy? Well, yes: if you're gonna create a copy and you don't feel like violating conservation laws (like, pretty much all of them), then you have to create a copy and an anti-copy. If you throw in an electron, an electron-positron pair will be created (Einstein says: "stimulated"), where the positron is the anti-copy. If you throw in a proton, Einstein's stimulated emission process near the black hole will create a proton-anti-proton pair. The copy stays outside of the black hole, and the anti-copy now finds itself inside of the black hole, alongside the original that the black hole just swallowed.

So let's just keep a count, shall we? Inside the black hole we have the original and the anti-copy, outside we have the copy. You can use the outside copy to make sure the laws of information conservation aren't broken, as I have argued before, but today our focus is on the stuff inside, because we imagine we threw Cooper into the wormhole. The black hole dutifully responds by stimulating a Cooper clone on the outside of the black hole, while the original Cooper, alongside an anti-Cooper, is traveling towards the singularity, which in this case connects this black hole to another one, far far away.
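If you like your bookkeeping in symbols, here is the cartoon version (and I stress that this is just a cartoon of the counting, not the quantum-optics calculation in the cloning papers): writing $\psi$ for Cooper and $\bar\psi$ for anti-Cooper,

$$\psi_{\rm in}\;\longrightarrow\;\underbrace{\psi\,\bar\psi}_{\rm inside}\;+\;\underbrace{\psi}_{\rm outside}.$$

One original plus one anti-copy behind the horizon, and one copy outside: nothing in this tally breaks a conservation law, because the copy and the anti-copy were stimulated as a pair.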

At this point I feel I should have a paragraph arguing that the vacuum could really produce something as complex as a Cooper-pair (see what I did there?) via stimulated emission. This is not a terrible question, and I'm afraid we may not really be able to answer it. It sure works on elementary particles. It should also work on arbitrary pure quantum states, and even mixed ones. We don't have an apparatus nearby to test this, so for the sake of this blog post I will simply assume that if you can somehow achieve coherence, then yes, the vacuum will copy even macroscopic objects. Just take my word for it. I know quantum.

But this interlude has distracted us from Cooper and his anti-twin traveling through the suitably stabilized wormhole, towards the event horizon of the other black hole that the entry portal is connected with, in both an ER and an EPR way. They are now inside of a black hole, yearning to be free, and let's imagine they have the time to ponder their existence. (They are not holding hands, by the way. That would be utter annihilation.)

What is it like inside of a black hole, anyway? It's a question I have been asked many times, be it on a Reddit AMA or when giving presentations called "Our Universe" to elementary school children. (Black holes always come up, because, as I like to say, they are like dinosaurs.)

If you can ignore the crushing gravity (which you can if the black hole you inhabit is big enough, and you are far enough away from the center), then the black hole doesn't look so different from the universe you are used to. But there is a very peculiar thing that happens to you. Now, if you happen to inhabit a decent-sized planet like Earth (not a black hole), and you shoot a small rocket up into the sky, it falls back down somewhere far away. If you build a much more powerful rocket, it may go up but, when coming back down, miss the surface of the planet. And keep missing it: it is actually in orbit. But if your rocket is even more powerful, it could leave your planet's gravitational field altogether.

But when you live inside of a black hole, gravity is so strong that no rocket is powerful enough to leave orbit. So you decide to fire a ray-gun instead because light surely goes faster than any rocket, but you then realize that gravity also bends light rays. And because you're in a black hole, the best you can do is that your light ray (after going up and up) basically goes in orbit around the black hole. Just about where Schwarzschild said the event horizon would be.
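In fact, you can locate that special radius with the classic Newtonian back-of-the-envelope argument (which famously gets the right answer even though the honest calculation is fully general-relativistic): ask where the escape velocity reaches the speed of light,

$$v_{\rm esc}=\sqrt{\frac{2GM}{r}}=c\quad\Longrightarrow\quad r_s=\frac{2GM}{c^2},$$

which is precisely the Schwarzschild radius of a black hole of mass $M$.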

So you see, when you are inside a black hole, nothing can go out. Everything you shoot out comes back at you (so to speak). There is actually a name for something like that, a region of space-time that you cannot penetrate: it is called a white hole.

So basically, if you are inside a black hole the rest of the universe looks to you like a white hole.

I've mentioned this before in the last paragraph of a somewhat more technical post, so if you want to revisit that, please go ahead, because it has some interesting connections to time-reversal invariance.

But now we understand the Cooper-pair's predicament. No Cooper can escape the black hole, because the horizon they are looking at is a white-hole horizon. Everything is lost, right?

Well, not so fast. In the movie "Interstellar", some deus ex machina extracts Cooper from the black hole, but how could this work in a world where the laws of physics aren't just mere suggestions?

The answer is obvious isn't it? And it involves murder. As promised.

Stimulated emission was able to breach the horizon in the first place, by creating a Cooper-pair. Can Anti-Cooper do the same thing from the inside? We now treat the horizon that separates the black hole interior from the outside as a white-hole horizon. And if you do the math right (and I did in the cloning paper) the anti-clone that is hurled into the white-hole horizon will create an anti-anti-clone on the other side. That anti-anti-clone is, of course, a clone: it is Cooper himself, resurrected on the outside. Somewhat stunned, we assume. 

But that is impossible, right? Because there already is a Cooper on the outside: the clone that was created when Cooper was absorbed into the black hole in the first place! And the no-cloning amendment of the constitution is surely enforced stringently. So that is where the murder comes in: it turns out that the only way that anti-Cooper can stimulate Cooper on the outside is if at the same time the original Cooper clone is sent into the black hole on the other side! 

And that's when you realize that this whole operation is a huge waste of time. Because you can only achieve this if the Cooper-clone travels towards the other black hole's horizon through normal spacetime! The only way to retrieve the Cooper who has gone through the Einstein-Rosen bridge is to have the Cooper-clone travel there in the conventional way, and "stimulate" him out. No other Cooper could do that, if you want to preserve coherence. So that means that you have to sacrifice Cooper to save Cooper, and at the same time you can't travel faster than light through a wormhole for the same reason: someone has to travel conventionally to get you out. Sometimes not violating the laws of physics can lead to a real letdown! And no funding for closed time-like curves either!

The moral of the story then is two-fold: 

1. Movies don't have to abide by the laws of physics.

2. The laws of physics have a way of making you obey them. Sometimes in simple ways, and sometimes in more subtle ones. But they always, always win. And that's a good thing.

Wednesday, March 14, 2018

Remembering Stephen Hawking


The passing of the great physicist Stephen Hawking today at the age of 76 fills me with sadness for many different reasons. On the one hand, it was inspiring to witness that, seemingly, the power of will and intellect can hold such a serious illness at bay for so long.  On the other hand, I am also sad that I never got to talk to him, and perhaps explain to him my take on his great body of work.
Stephen Hawking (1942-2018)
Source: Wikimedia
I ran into Stephen several times when I was at Caltech (which Hawking visited regularly), but a situation never developed in which we could “chat”, as it were. One day in 1992, I was walking along the lovely paths of Arcadia’s Arboretum with Gerry Brown, a nuclear theorist who also visited Caltech each year in the Spring, and with Hans Bethe, Brown’s collaborator on the theory of binary neutron stars.
Hans Bethe and Gerry Brown, at Caltech in 1992
From afar, both Gerry and I spotted Hawking being pushed by his nurse along the path. Realizing that our paths would cross, Gerry and I tried to get Hans to stop and engage Hawking, imagining that Hawking would be delighted to meet the eminent Bethe, winner of the Nobel Prize in Physics in 1967 for his discovery of how stars generate the energy to shine. However, Bethe curiously demurred. Later I asked Gerry why Hans did not take this opportunity, and he answered: “You’d be surprised how shy Hans can be.”
Hans passed away thirteen years ago, Gerry left us almost eight years later, and now Stephen is gone too. For me, it is always difficult to imagine that these great minds could simply cease to be. But after all, Hawking is known to have said “I regard the brain as a computer which will stop working when its components fail”, and in that he was surely correct. But of course, these great minds have left a legacy that is immortal, and we will keep them in our memory as long as we think about the stars, black holes, and the vastness of the universe.

Wednesday, October 18, 2017

Survival of the Steepest

Most textbooks tell you that the evolutionary process is really quite simple: three rules are all that's necessary: inheritance, variation, and selection. It is indeed true that these three rules are all that's needed for evolution to occur, but that does not mean that the evolutionary process is simple. In fact, quite the opposite. Real systems evolve in many different ways because the process takes place (is "instantiated") in many different environments. How the evolutionary process unfolds depends very much on those environments: they shape the mode and manner of adaptation.

Let me explain what I mean in more detail. How do the three necessary elements depend on the environment? The inheritance part of the "Darwinian Triad" isn't actually that susceptible to environmental variations (although there are cases). Variation (that is, changes incurred during transmission of genetic information from parent to child), and in particular selection, are very much subject to modulation by the environment. There are likely dozens of books written about the different ways that changes can occur to the carriers of information (yes, DNA) during replication, and this post won't be the one dwelling on those details. They are important, just not for this post. Here, we'll discuss the third prong of the Triad: selection.

It is very unlikely that you are reading this without being fairly well-versed in evolutionary biology, but just in case you got here from reading about black holes, here it is in a nutshell. Evolution occurs because random changes (variation) in the genetic information (that which is inherited) change the frequency at which that information is represented in a population, simply because the information is about how to make many copies of that information. Thus, information that is not increasing the fit of the organism to its environment tends to become less-represented, while information that leads to a better fit will increase in numbers, simply because a good fit means many copies. I realize that this may not be the way you have been taught the word "fitness" in the context of evolutionary biology in class, but it should have been.

Selection, by this account, really just means that "good info will be rewarded", in the sense that good info will increase the info-carrier's number of copies in the population. And because the world in which we (and this information) live is finite, when one type gets to have more copies, there must be fewer of the less-fit types. As a consequence, we will find that fitness, along with the information on how to make a fit organism, increases on average over time. Most of the time.

Only most of the time?

Well yes, this law about the increase of information in evolution is not an absolute one. For example, because the information in your genome is about the world in which you live, it follows that when the world changes then some of what used to be information is ... not ... information anymore. And because information makes you fit, your fitness will almost always drop when the world has changed for you. But there are other cases where fitness can decrease in selection, and this post is about one of them.

There is a fairly classical (by now) case where lower fitness is actually selected for, and we need to talk about it briefly because it is relatively well-known and--it turns out--is somewhat of a bookend to the case I'll introduce you to shortly. This case is called the "Survival of the Flattest" effect (the pun is meant to remind you that, due to this effect, it is not the fittest that survives). Indeed, in this particular case fitness, measured in terms of the number of copies that the information--meaning the genome--can make, can drop in evolution. This happens if the mutation rate is very high, and there are fitness peaks that are not so very high, but kind of "flat" instead, so that mutated copies of the information are mostly still informative. In order not to make this post too long, I'll refer you to the Wiki page that describes this effect, and just show the video from that Wiki link that describes the effect, below.

The authors of this video were, incidentally, a grad student (Randy Olson) and a postdoc (Bjørn Østman) from my lab at Michigan State University at the time. And the not so humblebrag disclosure is that the effect was actually discovered in my lab at Caltech in 2001. You can read all about that by googling it.

But let's get back to the matter at hand. In "Survival-of-the-flattest", organisms that occupy flat (but not terribly high) fitness peaks can outcompete populations that live on high (and very pointy) peaks, but only when the mutation rate is high enough (see schematic drawing in the figure below).

Survival of the flattest. At low mutation rate (top panel), the population living on peak A outcompetes the one living on peak B. At high mutation rate (lower panel), the steepness of peak A implies that many of the mutants on that peak have very low fitness, and the population as a whole will therefore grow poorly. The population on peak B, on the other hand, has mostly neutral mutants, and will outcompete the population at A. (Figure courtesy of Claus Wilke). 
Because of this "lower fitness outcompeting higher fitness" business, the effect violates the semi-edict of "ever-increasing information/fitness". It is a semi-law because, truthfully, nobody ever really declared it a law. It is a "most-of-the-time" law, much like the second law of thermodynamics, come to think of it.

The effect we'll discuss presently concerns population size rather than mutation rate, and just like the "survival-of-the-flattest" effect it violates the semi-edict; the almost-law. We shall call this new effect "survival-of-the-steepest". It goes without saying that this lame attempt at analogizing with the survival-of-the-flattest moniker couldn't be more obvious. But since that original moniker was due to Rich Lenski, and he did not object to this particular use, we'll just run with it and see if it sticks. (It may not.)

Come to think of it, Lenski had suggested an even better pun for this effect: "Drift Dodger". You'll appreciate that one more after digesting the stuff below. "Survival of the drift dodger". It could work, no?

So let's first talk a little bit about small populations. You remember, of course, that evolution is something that happens in populations. The reason why it cannot happen to a single isolated individual is the essence of one of the three components of the Darwinian Triad: selection. If there is only one organism, then you cannot have differential survival. There are no differences, because there is only one. You cannot win if there is no competition.

If you have two individuals, you could in principle have evolution because there can be differences between the two. Yet, I'm sure I can convince you that it will be extremely difficult for evolution to proceed in this contrived case. The reason for this is that organisms can die. Yes, you read that right: if organisms could not die, then two would be enough to sustain an evolutionary process in principle. But death is inevitable in a finite world with replication, so that settles that. Actually, let me be more precise: death, by itself, does not prevent evolution in small populations. It is instead random death that thwarts evolution, because if death only affected the lower-fitness type, then evolution could still work with only two individuals. However, if death strikes randomly, then half the time the fitter one of the pair would be removed, and the one left over then replicates to restore the pair. Because the less-fit individual got to reproduce, the overall population fitness has declined. This is the essence of genetic drift. Now imagine one of the pair is again struck by a mutation. We know that there are far more mutations that reduce fitness than there are those that increase fitness (it is easier to break things by chance than to improve them by random changes). After such a mutation, there is a new lower-fitness organism, and if death strikes randomly, there is again a fifty/fifty chance that the lower-fitness organism remains.

You can now understand that, unless beneficial mutations are at least as common as deleterious mutations, this process of a gradual loss of fitness will doom the population. And in the case I just described, this is precisely Muller's ratchet, an inexorable decline in fitness that will doom the small population to extinction. In this extreme example this happened to a population of two, but the process happens anytime the population is extremely small (say, 10 or smaller in most cases).
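If you would like to watch the ratchet click, here is a toy simulation (a minimal sketch of the two-individual scenario above, not the model from the paper; the mutation probabilities and effect sizes are made up):

    import random

    def ratchet(steps=10_000, p_beneficial=0.1):
        """Toy Muller's ratchet in a population of two."""
        pop = [1.0, 1.0]                    # both start at fitness 1.0
        for _ in range(steps):
            victim = random.randrange(2)    # death strikes at random: drift
            child = pop[1 - victim]         # the survivor replicates
            if random.random() < p_beneficial:
                child *= 1.05               # rare beneficial mutation
            else:
                child *= 0.95               # common deleterious mutation
            pop[victim] = child             # the mutated copy fills the vacancy
        return sum(pop) / 2

    print(ratchet())  # mean fitness has ratcheted down toward zero

Because death here ignores fitness entirely, the common deleterious mutations accumulate, and the mean fitness decays inexorably.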

It turns out that there is a way to resist this decline, but for a population to resist it, it must "live" on a particular type of fitness peak: one with steep slopes. The steeper, the better.

Imagine you have a population that has been dropped into a fitness landscape that has only two peaks: a lower one with steep slopes, and a higher one with gentle slopes (as in the picture below).
A small population will not be able to climb the peak with gentle slopes, because genetic drift will prevent small advances from fixing. However, such a population will be able to climb peaks with steep slopes (such as the one on the left), because selection is effective for large advances.
A large population will find itself on the blue peak with gentle slopes, because the population that climbed the red peak will be forced into extinction due to the higher fitness of the blue peak. However, if the population size is small, the tables are turned. The small population will not be able to ascend the blue peak, as drift will consistently (and no doubt annoyingly) "throw it back down": mutations with small effect cannot go to fixation when the population size is small. But that intrepid small population can climb the red peak, because it requires big steps to get up there. It might take a while to discover these steps, but once they are found, the population will safely occupy and maintain itself on the steep peak. This means that this population is robust to drift on such a peak. Were we to transplant the same population to the blue peak, it could not maintain itself there, and would drift back down that gentle slope. On the red, steep peak, however, the population is robust to drift. It can dodge the drift. There you have it.

So now we have seen that there are two important exceptions to the "survival of the fittest" rule, the law that stipulates that those genotypes that replicate the fastest should always be the ultimate winners of the competition. This rule holds only at very small mutation rates, and for very large population sizes.

Is there a more general rule that predicts the winner even at finite mutation rates and small population sizes? A universal law, in a way, that holds without restrictions in mutation rate and population size? I believe that there is such a law indeed, but you'll have to wait for another blog post to find out what it is! That law (I am teasing) is inspired by thermodynamics, which tells us that the lowest energy (just like the highest fitness) is not always the winner. That law will set fitness free once and for all.

The research described in this post is based on the paper linked to below. It is openly accessible to everyone.

T. LaBar and C. Adami, "Evolution of drift robustness in small populations", Nature Communications 8 (2017) 1012.

Sunday, June 25, 2017

An evolutionary theory of music

"The beauty of music is in the ear of the beholder", we are always told. Or perhaps we are not always told this, but I imagine that we should be told that. Because while I like a lot of music that other people like, I don't always agree with what other people say is--not just music--but insist is "good" music. I'm unabashedly a "romanticist": I love the piano concertos of Rachmaninov, and most of what Chopin wrote. I'm a Beethoven guy, but I have learned to like Bach, and there is some stuff that Mozart wrote that should be in the Hall of Fame of Music, compared to all music ever written. 

But when it comes to Stravinsky, Schönberg, Alban Berg, or Karlheinz Stockhausen, I'm at a loss. I don't understand it. It doesn't sound like music to me. 

Is it them, or is it me? Is there something about the music that the aforementioned composers wrote that is too complex for my brain? What is the complexity of music, anyway? Is it obvious that some music is just more complicated than some other music, and that it takes more sophisticated brains than mine to appreciate the postmodern kind of music?

I have to say that there is some evidence in favor of the position that, yes, my brain is just not sophisticated enough to appreciate Stockhausen. That I'm just not bright enough for Berg. Too rudimentary for Rautavaara. You get the drift.

The evidence is multifaceted. As a young lad, I simply assumed that I was right in loathing all this "modern music" nonsense (even though I had rehearsed Britten's War Requiem as an 11-year-old, a memory that I would only recover much later). I liked Beethoven, Rachmaninov, and Chopin. Then I saw a piece on TV that would change my perception of music forever. I saw a "Harvard Lecture" by Leonard Bernstein, a conductor and all-around musical genius whom I admired (I was perhaps 18). It is one of the now-famous "The Unanswered Question" lectures of Bernstein (but I did not know that when I saw it). Bernstein lectured about Schoenberg (as he spelled his name after he moved to the US). I can still remember my astonishment as he took apart Schoenberg's "Verklärte Nacht" and lectured me about its structure, and pointed out the references to earlier classical music (sometimes inverted). I realized then and there that I had been extremely naive about music.
Arnold Schoenberg, by Egon Schiele
Source: Wikimedia

That does not mean that I immediately came to like Schoenberg's music. I was still wondering whether anybody really liked it, as opposed to appreciate it on an intellectual level.

Then came the time when I was called to sing Stravinsky. 

We are performing a formidable jump here, from my formative years to a period where I was a Full Professor at the Keck Graduate Institute, a university specializing in Applied Life Sciences in Claremont, California. They had a choir there, and as I had sung in a choir as an 11-year-old (culminating in the aforementioned Britten episode that never resulted in a performance), I figured I'd get Mozart's Requiem off my bucket list. I admit I'm a bit obsessed with this piece of music. I perhaps know way too much about it at this point. Anyway, the Claremont College Choir was going to perform it, so I signed up. (Well, you don't just sign up: you have to audition, and pass.)

During the audition, I was asked to sing some fairly atonal stuff. I had no inkling whatsoever that the choir director was testing me on Stravinsky's Symphony of Psalms. But after I was admitted to the choir, I learned that this was the piece we were going to perform alongside the Requiem.  

I bought the CD, and listened to it on my daily commute from Pasadena to Claremont. At first, it sounded like cats were being drowned. I later asked people who were at the performance, and got similar reactions. I thought I could never, ever sing that. So I broke out GarageBand (or Logic Pro) and wrote the bass track (that was the voice I was to sing) onto the music, so that I could rehearse it.

With practice, the unthinkable happened. I started to understand the music. I started to like it. I started to be moved by it. I slowly realized that this was great music, and that I was completely incapable of realizing this upon first hearing it.

This is, by the way, a dynamic that is not completely limited to music. Similar things can happen to you in the appreciation of mathematics. There is some mathematics that is utterly obvious. It is obviously beautiful, and everybody usually appreciates that beauty. Simple number theory, for example. The zeta function. But then I found that there was mathematics that I could not easily grasp. There is no beauty in mathematics that you do not understand. It may look like gibberish to you, as if the author just juxtaposed symbols with the intent to obfuscate. I still believe that there is mathematics out there that is just gibberish, but the beauty lies in those pieces that you learn to appreciate after "listening" to them for as long as it takes until you start to understand them.

So, our brains (certainly mine) are not exactly reliable judges of beauty. What is beauty anyway?

If you start to think about this question, you've got to take into consideration evolutionary forces. What we call "beauty", or "beautiful", is something that appeals to us, and there are plenty of reasons why we should be manipulated by something appealing. Countless prey have been lured into demise thusly. So what appeals to us?

The answer to this is (at least this is my answer): "We like that which we can predict".

Many, many pages have been written about how our brain processes information, but for me, the most convincing narrative is due to Jeff Hawkins, entrepreneur and neuroscientist, whom I have written about several times in this blog (perhaps most notably in its very first installment, and its second, immortalizing our very first meeting).

What Jeff taught me (first in his breathtaking book "On Intelligence", and later in person) is that our brains are primarily prediction machines. Our brains predict the future, and we love it when we are right. We hate it when we are not. Let me give you the example I learned from Jeff's book, and which I have repeated countless times.

Walking, you may think, is easy. Those who have tried to make robots walk will tell you it is not. How do we do it, then? It turns out that bipedal walking relies on a complex sensory-motor interplay, and this is not the post to dwell on its details. But we know from experiment the following: if you lose the sense of touch in your feet, your gait will be severely affected. Basically, you'll stumble, rather than walk. Why is this?

It turns out that while you are happily conversing with the person next to you while walking, your brain subconsciously makes hundreds of predictions about what your sensory systems will experience next. And when it comes to walking, it makes predictions about the exact timing of the impact of the ground with your feet. Your brain does this for every step (but of course you do not realize that, because of the "unconscious" part). And every time that your foot experiences the ground (via your feet's sensors) at precisely the predicted time, your brain (subconsciously) says "Aaaah."

Your brain likes it when its predictions are fulfilled. It is happy when anticipation is actualized. Because when all is as predicted, then all is well. And when all is well, our brain does not need to waste precious energy on attending to details, when important things have to be addressed.

But what if there is a rut in the road, or a lump in the lane? In those cases, the anticipated impact of the foot with the road will be delayed (rut) or early (lump), and our brain immediately springs to attention. The prediction was not realized, and our brain (correctly) interprets this as a harbinger of trouble. If my prediction was incorrect (so argues your brain) in this instance, it might be incorrect in the next, and this means that we need to pay close attention to the situation at hand. And so, reacting to this alert, you now inspect the path you are treading with much more care, to learn about the imperfections you hitherto ignored, and to learn to anticipate those too.

This little example, I'm sure you see immediately, is emblematic of how our brain processes all sensory information, including visual, and for the purpose of this blog post, auditory, information.

According to this view, our brain is happiest if it can anticipate the next sounds. And, when it comes to music, this predisposition of our brain begins to explain a lot about how we process music. After all, the structure of repetitions in almost all (I should say, Western) forms of music is uncanny. We like repetition, and we (dare I say it) think it is beautiful. Not too much repetition, mind you. But it is now clear that we like repetition because it makes music predictable. This is also why, we now realize much more clearly, we begin to like music after we have heard it a couple of times. And yes, some music is so simple, so derivative, so instantly recognized that we also like it instantly, upon first hearing. But perhaps we can now also understand that this is not the kind of music that requires any artistry.

How much repetition is best? Is there an optimum that has just enough repetition that we can barely predict it (creating the happiness of correct prediction) and evades the boredom of the obvious repetition that does not tax us, but annoys instead?  If this is true, shouldn't an evolutionary process be able to optimize it?

Yes, you can evolve music. You can do this yourself right now, by moseying over to evolectronica, where you can click on audio loops and rate them. The average rating will determine the number of offspring any of the tunes in the population will obtain (suitably mutated, of course). Good tunes will prosper in this world. I have nothing to do with this site, but I did write about a paper that studied how such loops evolve. That paper was published in the Proceedings of the US National Academy of Sciences (link here), and turns out to be an interesting application of evolutionary genetics. My commentary, highlighting the importance of epistasis between genetic traits, was published in the same journal, at this link.
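For the algorithmically inclined, the reproduction scheme is just fitness-proportional selection with mutation. Here is a minimal sketch (the tune representation and the mutation operator are stand-ins of my own invention, not what the site or the paper actually uses):

    import random

    def mutate(tune, rate=0.1):
        """Nudge some notes at random (a stand-in for real loop mutation)."""
        return [n + random.choice([-1, 1]) if random.random() < rate else n
                for n in tune]

    def next_generation(tunes, ratings):
        """Higher-rated tunes leave more offspring, on average."""
        offspring = random.choices(tunes, weights=ratings, k=len(tunes))
        return [mutate(t) for t in offspring]

    # A toy population of three "tunes" (note lists) and their listener ratings
    tunes = [[60, 62, 64, 65], [60, 60, 67, 67], [55, 59, 62, 66]]
    print(next_generation(tunes, ratings=[4.5, 2.0, 3.5]))

Iterate this, and the population drifts toward whatever the raters reward.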

In the paper linked above, the authors use two traits to quantify the fitness of music: "tonal clarity" and "rhythmic complexity". They find that while during early evolution the overall musical appeal of the tunes increased as each of these traits increased, the appeal then flattened out. During that time, neither clarity nor complexity increased further, seemingly holding each other back (see the qualitative rendition of the process in the figure below).
Evolutionary trajectory of tunes under selection (from [1]). Evolution does not reach maximum fitness because traits interact, and likely because other traits like predictability are not considered. Figure by B. Østman.
In the light of what we just discussed, maybe the failure to maximize fitness (in terms of musical appeal) is not surprising. Brains do prefer variation, but not too much. Average brains (such as mine) certainly do not prefer super-complicated rhythms. Yes, we like complexity, but complexity that remains predictable. We enjoy challenges, as long as they remain leisurely. Stravinsky is a challenge both in tonal clarity and rhythmic complexity. It can be mastered, but it requires formidable repetition.

So what is good music? That, it appears, will always remain in the brain of the beholder, simply because different brains have different capacities to predict. Some of us will love the simplest of tunes because they stick with us immediately. Some others love the challenge of trying to understand a piece that even after one hundred listens cannot be whistled.

I've whistled Stravinsky's Symphony of Psalms, so anything is possible!


[1] R. M. MacCallum, M. Mauch, A. Burt, and A. M. Leroi, Evolution of music by public choice. Proc. Natl. Acad. Sci. USA 109 (2012) 12081-12086.
[2] C. Adami, Adaptive walks on the fitness landscape of music. Proc. Natl. Acad. Sci. USA 109 (2012) 11898–11899.

Monday, June 19, 2017

What can the physics of spin crystals tell us about how we cooperate?

In the natural world, cooperation is everywhere. You can see it among people, of course, but not everybody cooperates all the time. Some people, as I'm sure you've heard or experienced, don't really care for cooperation. Indeed, if cooperation were something that everybody does all the time, we wouldn't even talk about it: we'd take it for granted.

But we cannot take it for granted, and the main reason for this has to do with evolution. Grant me, for a moment, that cooperation is an inherited behavioral trait. This is not a novelty, mind you: plenty of behavioral traits are inherited. You may not be completely aware that you yourself have such traits, but you sure do recognize them in animals, in particular courtship displays and all the complex rituals associated with them. So if a behavioral trait is inherited, it is very likely selected for because it enhances the organism's fitness. But the moment you think about how cooperation as a trait may have evolved, you hit a snag. A problem, a dilemma.

If cooperation is a decision that promotes increased fitness if two (or more) individuals engage in it, it must be just as possible to not engage in it. (Remember, cooperation is only worth talking about if it is voluntary.) The problem arises when in a group of cooperators an individual decides not to cooperate. It becomes a problem because that individual still gets the benefit of all the other individuals cooperating with them, but without actually paying the cost of cooperation.  Obtaining a benefit without paying the cost means you get mo' money, and thus higher fitness. This is a problem because if this non-cooperation decision is an inherited trait just as cooperation is, well then the defector's kids (a defector is a non-cooperator) will do it too, and also leave more kids. And the longer this goes on, all the cooperators will have been wiped out and replaced by, well, defectors. In the parlance of evolutionary game theory, cooperation is an unstable trait that is vulnerable to infiltration by defectors. In the language of mathematics, defection--not cooperation--is the stable equilibrium fixed point (a Nash equilibrium). In the language of you and me: "What on Earth is going on here?" 

Here's what's going on. Evolution does not look ahead. Evolution does not worry that "Oh, all your non-cooperating nonsense will bite you in the tush one of these days", because evolution rewards success now, not tomorrow. By that reasoning, there should not be any cooperating going on among people, animals, or microbes for that matter. Yet, of course, cooperation is rampant among people (most), animals (most), and microbes (many). How come?

The answer to this question is not simple, because nature is not simple. There are many different reasons why the naive expectation that evolution cannot give rise to cooperation is not what we observe today, and I can't go into analyzing all of them here. Maybe one day I'll do a multi-part series (you know I'm not above that) and go into the many different ways evolution has "found a way". In the present setting, I'm going to go all "physics" with you instead, and show you that we can actually try to understand cooperation using the physics of magnetic materials. I kid you not.

Cooperation occurs between pairs of players, or groups of players. What I'm going to show you is how you can view both of these cases in terms of interactions between tiny magnets, which are called "spins" in physics. They are the microscopic (tiny) things that macroscopic (big) magnets are made out of. In theories of ferromagnetism, the magnetism is created by the alignment of electron spins in the domains of the magnet, as in the picture below.
Fig. 1: Micrograph of the surface of a ferromagnetic material, showing the crystal "grains", which are areas of aligned spins (Source: Wikimedia).
If the temperature were exactly zero, then in principle all these domains could align to point in the same direction, so that the magnetization of the crystal would be maximal. But when the temperature is not zero (degrees Kelvin, that is), then the magnetization is less than maximal. As the temperature is increased, the magnetization of the crystal decreases, until it abruptly vanishes at the so-called "critical temperature". It would look something like the plot below.
Fig. 2: Magnetization M of a ferromagnetic crystal as a function of temperature T (arbitrary units). 
"That's all fine and dandy", I hear you mumble, "but what does this have to do with cooperation?" And before I have a chance to respond, you add: "And why would temperature have anything to do with how we cooperate? Do you become selfish when you get hot?"

All good questions, so let me answer them one at a time. First, let us look at a simpler situation, the "one-dimensional spin chain" (compared to the two-dimensional "spin-lattice"). In physics, when we try to solve a problem, we first try to solve the simplest and easiest version of the problem, and then we check whether the solution we came up with actually applies to the more complex and messier real world. A one-dimensional chain may look like this one:
Fig. 3: A one-dimensional spin chain with periodic boundary condition
This chain has no beginning or end, so that we don't need to deal with, well, beginnings and ends. (We can do the same thing with a two-dimensional crystal: it then topologically becomes a torus.)

So what does this have to do with cooperation? Simply identify a spin-up with a cooperator, and a spin-down with a defector, and you get a one-dimensional group of cooperators and defectors:

                                                C-C-C-D-D-D-D-C-C-C-D-D-D-C-D-C

Now, asking for the average fraction of C's vs. D's on this string becomes the same thing as asking for the magnetization of the spin chain! All we need is to write down how the players in the chain interact. In physics, spins interact with their nearest neighbors, and there are three different values for the "interaction energies", depending on how the spins are oriented. For example, you could write
$$E(\uparrow,\uparrow)=a,\qquad E(\uparrow,\downarrow)=E(\downarrow,\uparrow)=b,\qquad E(\downarrow,\downarrow)=c,$$
which you could also write into matrix form like so:
$$E=\begin{pmatrix} a & b\\ b & c \end{pmatrix}$$
And funny enough, this is precisely how payoff matrices in evolutionary game theory are written! And because payoffs in game theory are translated into fitness, we can now see that the role of energy in physics is played by fitness in evolution. Except, as you may have noted immediately, that in physics the interactions lower the energy, while in evolution, Darwinian dynamics maximizes fitness. How can the two be reconciled?

It turns out that this is the easy part. If we replace all fitnesses by "energy=max_fitness minus fitness", then fitness maximization is turned into energy minimization. This can be achieved simply by taking a payoff matrix such as the one above, identifying the largest value in the matrix, and replacing all entries by "largest value minus entry". And in physics, a constant added (or subtracted) to all energies does not matter (remember when they told you in physics class that all energies are defined only in relation to some scale? That's what they meant by that.)
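Written out as a formula (with $F$ the payoff matrix and $E$ the resulting energy matrix), the prescription is simply

$$E_{ij}=\max_{k,l}F_{kl}-F_{ij},$$

so that the best payoff is assigned energy zero, and maximizing fitness becomes minimizing energy. Applied to the matrix above (assuming $a$ is the largest entry), this gives $E(\uparrow,\uparrow)=0$, $E(\uparrow,\downarrow)=E(\downarrow,\uparrow)=a-b$, and $E(\downarrow,\downarrow)=a-c$.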

"But what about the temperature part? There is no temperature in game theory, is there?"

You're right, there isn't. But temperature in thermodynamics is really just a measure of how energy fluctuates (it's a bit more complicated, but let's leave it at that). And of course fitness, in evolutionary theory, is also not a constant. It can fluctuate (within any particular lineage) for a number of reasons. For example, in small populations the force that maximizes fitness (the equivalent of the energy-minimization principle) isn't very effective, and as a result the selected fitness will fluctuate (generally, decrease, via the process of genetic drift). Mutations also will lead to fitness fluctuations, so generally we can say that the rate at which fitness fluctuates due to different strengths of selection can be seen as equivalent to temperature in thermal physics.

One way to model the strength of selection in game theory is to replace the Darwinian "strategy inheritance" process (a successful strategy giving rise to successful "children-strategies") with a "strategy adoption" model, where an individual can adopt the strategy of a competitor with a certain probability. Temperature in such a model would simply quantify how likely it is that an individual will adopt an inferior strategy. And it turns out that "strategy adoption" and "strategy inheritance" give rise to very similar dynamics, so we can use strategy adoption to model evolution. And lo and behold, the way the boundaries between groups of aligned spins change in magnetic crystals follows precisely this "spin adoption" model, also known as Glauber dynamics. This will become important later on.

OK, I realize this is all getting a bit dry. Let's just take a time-out, and look at cat pictures. After all, there is nothing that can't be improved by looking at cat pictures.  Here's one of my cat eyeing our goldfish:
Fig. 4: An interaction between a non-cooperator and an unwitting subject
Needless to say, the subsequent interaction between the cat and the fish did not bode well for the future of this particular fish's lineage, but it should be said that because the fish was alone in its bowl, its fitness was zero regardless of the unfortunate future encounter.

After this interlude, before we forge ahead, let me summarize what we have learned.

1. Cooperation is difficult to understand as being a product of evolution because cooperation's benefits are delayed, and evolution rewards immediate gains (which favor defectors).

2. We can study cooperation by exploiting an interesting (and not entirely overlooked) analogy between the energy-minimization principle of physics, and the fitness-maximizing principle of evolution.

3. Cooperation in groups with spatial structure can be studied in one dimension. Evolutionary game theory between players can be viewed as the interaction of spins in a one-dimensional chain.

4. The spin chain "evolves" when spins "adopt" an alternative state (as if mutated) if the new state lowers the energy/increases the fitness, on average.

All right, let's go a-calculating! But let's start small. (This is how you begin in theoretical physics, always). Can we solve the lowly Prisoner's Dilemma?

What's the Prisoner's Dilemma, you ask? Why, it's only the most famous game in the literature of evolutionary game theory! It has a satisfyingly conspiratorial name, and an open-ended story. Who are these prisoners? What's their dilemma? I wrote about this game before here, but to be self-contained I'll describe it again.

Let us imagine that a crime has been committed by a pair of hoodlums. It is a crime somewhere between petty and serious, and if caught in flagrante, the penalty is steep (but not devastating). Say, five years in the slammer. But let us imagine that the two conspirators were caught fleeing the scene independently, leaving the law-enforcement professionals puzzled. "Which of the two is the perp?", they wonder. They cannot convene a grand jury because each of the alleged bandits could say that it was the other who committed the deed, creating reasonable doubt. So each of the suspects is questioned separately, and the interrogator offers each the same deal: "If you tell us it was the other guy, I'll slap you with a charge of being in the wrong place at the wrong time, and you get off with time served. But if you stay mum, we'll put the screws on you." The honorable thing is, of course, not to rat out your compadre, because they will each get a lesser sentence if the authorities cannot pin the deed on an individual. But they also must fear being had: having a noble sentiment can land you behind bars for five years while your former friend dances in the streets. Staying silent is the "cooperating" move; ratting out, because of its temptation, is "defection". The rational solution in this game is indeed to defect and rat out, even though with this move each player gets a longer sentence than if they both cooperated. But it is the "correct" move. And herein lies the dilemma.

A typical way to describe the costs and benefits in this game is in terms of a payoff matrix:
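With C labeling the first row and column and D the second, the matrix (spelled out here from the description that follows) reads:
$$\begin{pmatrix} b-c & -c\\ b& 0\\ \end{pmatrix}$$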
Here, b is the benefit you get from cooperation, and c is the cost. If both players cooperate, the "row player" receives b-c, as does the "column" player. If the row player cooperates but the column player defects, the row player pays the cost but does not reap the reward, for a net -c. If the tables are turned, the row player gets b but does not pay the cost, as they just defected. If both defect, they each get zero. So you see that the matrix only lists the payoff for the row player (but the payoff for the column player is evident from inspection).

We can now use this matrix to calculate the mean "magnetization" of a one-dimensional chain of Cs and Ds, by pretending that \({\rm C}=\uparrow\) and \({\rm D}=\downarrow\) (the opposite identification would work just as well). In thermal physics, we calculate this magnetization as a function of temperature, but I'm not going to show you in detail how to do this. You can look it up in the paper that I'm going to link to at the end. Yes I know, you are so very surprised that there is a paper attached to the blog post. Or a blog post attached to the paper. Whatever.

Let me show you what this fraction of cooperators (or magnetization of the spin crystal) looks like:
Fig. 5: "Magnetization" of a 1D chain, or fraction of cooperators, as a function of the net payoff \(r=b-c\), for three different temperatures. 
You notice immediately that the magnetization is always negative, which here means that there are always more defectors than cooperators. The dilemma is immediately obvious: as you increase \(r\) (meaning that there is increasingly more benefit relative to cost), the fraction of defectors actually increases. When the net payoff for cooperation increases, you would expect that there would be more cooperation, not less. But the temptation to defect increases also, and so defection becomes more and more rational.

Of course, none of these findings are new. But it is the first time that the dilemma of cooperation was mapped to the thermodynamics of spin crystals. Can this analogy be expanded, so that the techniques of physics can actually give new results?

Let's try a game that's a tad more sophisticated: the Public Goods game. This game is very similar to the Prisoner's Dilemma, but it is played by three or more players. (When played by two players, it is the same as the Prisoner's Dilemma.) The idea of this game is also simple. Each player in the group (say, for simplicity, three) can either pay into a "pot" (the Public Good), or not. Paying means cooperating, and not paying (obviously) is defection. After this, the total Public Good is multiplied by a parameter that is larger than 1 (we will call it r here also), which you can think of as a synergy effect stemming from the investment, and the result is then divided equally among all players in the group, regardless of whether they paid in or not.

Cooperation can be very lucrative: if all players in the group pay in one and the synergy factor r=2, then each gets back two (the pot has grown to six from being multiplied by two, and those six are evenly divided among all three players). This means one hundred percent ROI (return on investment). That's fantastic! Trouble is, there's a dilemma. Suppose Joe Cheapskate does not pay in. Now the pot is 2, multiplied by 2 is 4. In this case each player receives 1 and 1/3 back, which is still an ROI of 33 percent for the cooperators, not bad. But check out Joe: he paid in nothing and got 1.33 back. His ROI is infinite. If you translate earnings into offspring, who do you think will win the battle of fecundity? The cooperators will die out, and this is precisely what you observe when you run the experiment. As in the Prisoner's Dilemma, defection is the rational choice. I can show this to you by simulating the game in one dimension again, where a player now interacts with its two nearest neighbors to the left and right.
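But first, if you want to check Joe's arithmetic yourself, here is a minimal sketch (the function name is mine):

```python
def public_goods_payoffs(contributions, r):
    """Net payoff to each player in one round (a sketch): the pot is
    multiplied by the synergy r, split evenly, and each player's own
    contribution is subtracted from their share."""
    share = sum(contributions) * r / len(contributions)
    return [share - c for c in contributions]

print(public_goods_payoffs([1, 1, 1], r=2))  # [1.0, 1.0, 1.0]: 100% ROI each
print(public_goods_payoffs([1, 1, 0], r=2))  # cooperators net 1/3, Joe gets 4/3
```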
The payoff matrix is different from that of the Prisoner's Dilemma, of course. In the simulation, we use "Glauber dynamics" to update a strategy. (Remember when I warned that this was going to be important?) The strength of selection is inversely proportional to what we would call temperature, and this is quite intuitive: if the temperature is high, then changes are so fast and random that selection is very ineffective because temperature is larger than most fitness differences. If the temperature is small, then tiny differences in fitness are clearly "visible" to evolution, and will be exploited.
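For the curious, here is a much-simplified sketch of such a simulation (not our actual code: it gives each player the payoff of a single three-player game with its two neighbors, and uses the Glauber-style adoption rule from above):

```python
import math
import random

def pg_payoff(s, i, r):
    """Payoff of player i on the ring: one three-player Public Goods
    game with its left and right neighbors (1 = cooperate, 0 = defect)."""
    N = len(s)
    pot = (s[(i - 1) % N] + s[i] + s[(i + 1) % N]) * r
    return pot / 3.0 - s[i]  # equal share of the pot, minus own contribution

def magnetization(N=1024, r=3.5, T=0.2, updates=200_000, seed=1):
    random.seed(seed)
    s = [random.randint(0, 1) for _ in range(N)]
    for _ in range(updates):
        i = random.randrange(N)                # pick a random player...
        j = (i + random.choice((-1, 1))) % N   # ...and a random neighbor
        df = pg_payoff(s, j, r) - pg_payoff(s, i, r)
        if random.random() < 1.0 / (1.0 + math.exp(-df / T)):
            s[i] = s[j]                        # Glauber-style adoption
    return 2.0 * sum(s) / N - 1.0              # fraction C minus fraction D

print(magnetization())  # should turn positive above r ~ 3 (cf. Fig. 6)
```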

The simulations show that (as opposed to the Prisoner's Dilemma) cooperation can be achieved in this game, as long as the synergy factor r is larger than the group size:
Fig. 6: Fraction of cooperators in a computational simulation of the Public Goods game in one dimension. Here T is the inverse of the selection strength. As \(T\to0\), the change from defection to cooperation becomes more and more abrupt. There are error bars, but they are too small to be seen. 
This graph shows that there is an abrupt change from defection to cooperation as the synergy factor is increased, and this change becomes more and more abrupt the smaller the "temperature", that is, the larger the strength of selection. This behavior is exactly what you would expect from a phase transition at a critical r=3, so it looks like this game should also be describable by thermodynamics.

Quick aside here. If you just said to yourself "Wait a minute, there are no phase transitions in one dimension" because you know van Hove's theorem, you should immediately stop reading this blog and skip right to the paper (link below) because you are in the wrong place: you do not need this blog. If, on the other hand, you read "van Hove" and thought "Who?", then please keep on reading. It's OK. Almost nobody knows this theorem.

Alright, I said we were going to do the physics now. I won't show you how exactly, of course. There may not be enough cat pictures on the Internet to get you to follow this. <Checks>. Actually, I take that back. YouTube alone has enough. But it would still take too long, so let's just skip right to the result.

I derive the mean fraction of cooperators as the mean magnetization of the spin chain, which I write as \(\langle J_z\rangle_\beta\). This looks odd to you because none of these symbols have been defined here. The J refers to the spin operator in physics, and the z refers to the z-component of that operator. The spins you have seen here all point either up or down, which just means \(J_z\) is minus one or plus one here. The \(\beta\) is a common abbreviation in physics for the inverse temperature, that is, \(\beta=1/T\). And the angled brackets just mean "average". So the symbol \(\langle J_z\rangle_\beta\) is just reminding you that I'm not calculating the average fraction of cooperators directly. I am calculating the magnetization of a spin chain at finite temperature, which is the average number of up-spins minus down-spins, per site. And I did all this by converting the payoff matrix into a suitable Hamiltonian, which is really just an energy function.
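In symbols (with \(N_\uparrow\) up-spins out of \(N\) total, and writing \(f_{\rm C}\), \(f_{\rm D}\) for the fractions of cooperators and defectors; that notation is mine, not the paper's):
$$\langle J_z\rangle_\beta=\frac{N_\uparrow-N_\downarrow}{N}=f_{\rm C}-f_{\rm D}.$$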

Mathematically, the result turns out to be surprisingly simple:
$$\langle J_z\rangle=\tanh\Big[\frac\beta2\Big(\frac r3-1\Big)\Big] \ \ \ \ (1)$$
Let's plot the formula, to check how this compares to simulating game theory on a computer:
Fig. 7: The above formula, plotted against r for different inverse temperatures \(\beta\).
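If you want to reproduce this figure yourself, a minimal sketch will do (the \(\beta\) values here are illustrative, not necessarily the ones used in the figure):

```python
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(0, 6, 200)
for beta in (1, 5, 50):                      # illustrative inverse temperatures
    plt.plot(r, np.tanh(beta / 2 * (r / 3 - 1)), label=rf"$\beta={beta}$")
plt.axvline(3, color="gray", ls=":")         # critical synergy r = 3
plt.xlabel("synergy factor $r$")
plt.ylabel(r"$\langle J_z\rangle$")
plt.legend()
plt.show()
```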

OK, let's put them side-by-side, the simulation, and the theory:
You'll notice that they are not exactly the same, but they are very close. Keep in mind that the theory assumes (essentially) an infinite population. The simulation has a finite population (1,024 players), and I show the average of 100 independent replicate simulations, each run for 2 million updates, meaning that each site of the chain was updated about 2,000 times.

Even though they are so similar, how they were obtained could hardly be more different. The set of curves on the left was obtained by updating "actual" strings many, many times, and recording the fraction of Cs and Ds on them after doing this 2 million times. (This, like any computational simulation you see in this post, was done by my collaborator on this project, Arend Hintze.) To obtain the curves on the right, I just used a pencil, paper, and an eraser. This shows off the power of theory: once you have a closed-form solution such as Eq. (1) above, not only does the solution tell you some important things directly, but you can now imagine using the formalism to do all the other things that are usually done in spin physics, things we never would have thought of doing if all we did was simulate the process.

And that's exactly what Arend Hintze and I did: we looked for more analogies with magnetic materials, and whether they can teach you about the emergence of cooperation. But before I show you one of them, I will mercifully throw in some more cat pictures. This is my other cat, the younger one. She is in a box, and no, Schrödinger had nothing to do with it. Cats just like to sit in boxes. They really do.
Our cat Alex has appropriated the German Adventskalender house
All right, enough with the cat entertainment. Let's get back to the story. Arend and I had some evidence from a previous paper [1] that this barrier to cooperation (namely, that the synergy has to be at least as large as the group size) can be lowered if defectors can be punished (by other players) for defecting. That punishment, it turns out, is mostly meted out by cooperators, because being a defector and a punisher at the same time turns out to be an exceedingly bad strategy. I'm honestly not making a political commentary here. Honest. OK, almost honest.

And thinking about punishment as an "incentive to align", we wondered (seeing the analogy between the battle between cooperators and defectors, and the thermodynamics of low-dimensional spin systems) whether punishment could be viewed like a magnetic field that attempts to align spins in a preferred direction.

And that turned out to be true. I will again spare you the technical part of the story (which is indeed significantly more technical), but I'll show you the side-by-side of the simulation and the theory. In those plots, I show you only one temperature, \(T=0.2\), that is, \(\beta=5\). But I show three different fines, meaning punishments of different strength, here labelled by \(\epsilon\). The higher \(\epsilon\), the higher the "pain" of punishment for the defector (measured in terms of reduced payoff).

When we did the simulations, we also included a parameter for the cost of punishing others. Indeed, punishing subtracts from a cooperator's net payoff: you should not be able to punish others without suffering a little bit yourself. (Again, I'm not being political here.) But we saw little effect of this cost on the results, while the effect of punishment really mattered. When I derived the formula for the magnetization as a function of the cost of punishment \(\gamma\) and the effect of punishment \(\epsilon\), I found:
$$\langle J_z\rangle=\frac{1-\cosh^2(\beta\frac\epsilon4)\,e^{-\beta(\frac r3+\frac\epsilon2-1)}}{1+\cosh^2(\beta\frac\epsilon4)\,e^{-\beta(\frac r3+\frac\epsilon2-1)}} \ \ \ \ (2)$$
Keep in mind, I don't expect you to nod knowingly when you see that formula. What I want you to notice is that there is no \(\gamma\) in it. I can assure you it was there during the calculation, but in the very last steps it miraculously cancelled out of the final equation, leaving a much simpler expression than the one I had carried through from the beginning.
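One consistency check is quick enough to do right here: switch the magnetic field off by setting \(\epsilon=0\). Then \(\cosh^2(0)=1\), and Eq. (2) collapses to
$$\langle J_z\rangle=\frac{1-e^{-\beta(\frac r3-1)}}{1+e^{-\beta(\frac r3-1)}}=\tanh\Big[\frac\beta2\Big(\frac r3-1\Big)\Big],$$
which is exactly Eq. (1), as it must be when there is no punishment.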

And that, dear reader, who has endured for so long, propped up and carried along by cat pictures no less, is the main message I want to convey. Mathematics is a set of tools that can help you keep track of things. Maybe a smarter version of me could have realized all along that the cost of punishment \(\gamma\) would not play a role, and the math would have been unnecessary. But I needed the math to tell me that (the simulations had hinted at it, but they were not conclusive).

Oh, I now realize that I never showed you the comparison between simulation and theory in the presence of punishment (aka, the magnetic field). Here it is (simulation on the left, theory on the right):

So what is our take-home message here? There are many, actually. A simple one tells you that to evolve cooperation in populations, you need some enabling mechanisms to overcome the dilemma. Yes, a synergy larger than the group size will get you cooperation, but this is achieved by eliminating the dilemma, because when the synergy is that high, not contributing actually hurts your bottom line. Here the enabling mechanism is punishment, but we need to keep in mind that punishment is only possible if you can distinguish cooperators from defectors (lest you punish indiscriminately). This ability is tantamount to the communication of one bit of information, which is the enabling factor I previously wrote about when discussing the Prisoner's Dilemma with communication. Oh and by the way, the work I just described has been published in Ref. [2]. Follow the link, or find the free version on arXiv.

A less simple message is that while computational simulations are a fantastic tool to go beyond mathematics--to go where mathematics alone cannot go [3]--new ideas can open up paths that we thought could only be pursued with the help of computers. Mathematics (and physics) thus still has some surprises to deliver, and Arend and I are hot on the trail of others. Stay tuned!


References

[1] A. Hintze and C. Adami, Punishment in public goods games leads to meta-stable phase transitions and hysteresis, Physical Biology 12 (2015) 046005.
[2] C. Adami and A. Hintze, Thermodynamics of evolutionary games, Physical Review E 97 (2018) 062136.
[3] C. Adami, J. Schossau, and A. Hintze, Evolutionary game theory using agent-based methods, Physics of Life Reviews 19 (2016) 38-42.