Eqs

Tuesday, December 6, 2016

Can Life emerge spontaneously?

It would be nice if we knew where we came from. Sure, Darwin's insight that we are the product of an ongoing process that creates new and meaningful solutions to surviving in complex and unpredictable environments is great and all. But it requires three sine qua non ingredients: inheritance, variation, and differential selection. Three does not seem like much, and the last two are really stipulated semper ibi: There is going to be variation in a noisy world, and differences will make a difference in worlds where differences matter. Like all the worlds you and I know. So it is kind of the first ingredient that is a big deal: Inheritance.

Inheritance is indeed a bit more tricky. Actually, a lot more tricky. Inheritance means that an offspring carries the characters of the parent. Not an Earth-shattering concept per se, but in the land of statistical physics, inheritance is not exactly a given. Mark the "offspring" part of that statement. Is making offspring such a common thing?

Depends on how you define "offspring". The term has many meanings. Icebergs "calve" other icebergs, but the "daughter" icebergs are not really the same as the parent in any meaningful way. Crystals grow, and the "daughter" crystals do indeed have the same structure as the "parent" crystals. But this process (while not without interest for those studying the origins of life) actually occurs while liberating energy (it is a first-order phase transition).

The replication of cells (or people, for that matter) is very different from the point of view of statistical physics, thermodynamics, and indeed probability theory. Here we are going to look at this process entirely from the point of view of the replication of the information inherent in the cell (or the person). The replication of this information (assuming it is stored in polymers of a particular alphabet) is not energetically favorable. Instead, it requires energy, which explains why cells only grow if there is some kind of food around.

Look, the energetics of molecular replication are complicated, messy, and depend crucially on what molecules are available in what environment, at what temperature, pressure, salt concentrations, etc. etc. My goal for this blog post is to evade all that. Instead, I'm just going to ask how likely it is in general for a molecule that encodes a specific amount of information to arise by chance. Unless the information stored in the sequence is specifically about how to speed up the formation of another such molecule, however unlikely the formation of the first molecule was, the formation of two of them would be twice as unlikely (actually, exponentially so, but we'll get to that).

So this is the trick then: We are not interested in the formation of any old information by chance: we need the spontaneous formation of information about how to make another one of those sequences. Because, if you think a little bit about it, you realize that it is the power of copying that renders the ridiculously rare ... conspicuously commonplace. Need some proof for that? Perhaps the most valuable postage stamp on Earth is the famed "Blue Mauritius", a stamp that has inspired legendary tales and shortened the breath of many a collector, as there are (most likely) only two handfuls of those stamps left in the universe today.

Blue (left) and Red (right) Mauritius of 1847.  (Wikimedia).
But the original plate from which this stamp was printed still exists. Should someone endeavor to print a million of those, I doubt that each would be worth the millions currently shelled out for one of those "most coveted scraps of paper in existence". (Of course, experts would be able to tell the copies apart from the originals, given the sophistication of the forensic methods deployed on such works and their forgeries.) But my point still stands: copying makes the rare valuable ... cheaply ordinary.

When the printing press (the molecular kind) has not yet been invented, what does it cost to obtain a piece of information? This blog post will provide the answer, and most importantly, provide pointers to how you could cheat your way to a copy of a piece of information that would be rare not just in this universe, but in a billion billion trillion more. Well, in principle.

How do you quantify rarity? Generally speaking, it is the number of things that you want, divided by the number of things there are. For the origin of life, let's imagine for a moment that replicators are sequences of linear heteropolymers. This just means that they are sequences of "letters" on a string, really. They don't have to self-replicate by themselves, but they have to encode the information necessary to ensure that they get replicated somehow. For the moment, let us restrict ourselves to sequences of a fixed length \(L\). Trust me here, this is for your own good. I can write down a more general theory for arbitrary length sequences that does nothing to help you understand. On the contrary. It's not a big deal, so just go with it.

How many sequences are there of length \(L\)? Exactly \(D^L\), of course (where \(D\) is the size of the alphabet). How many self-replicators are there among those sequences? That, we all understand, is the big question. It could be zero, of course. Let's imagine it is not, and call that number \(N_e\). If there is a process that randomly assembles polymers of length \(L\), the likelihood \(P\) that you get a replicator in that case is
\(P=\frac{N_e}{D^L}\)       (1)
So far so good. What we are going to do now is relate that probability to the amount of information contained in the self-replicating sequence. 

That we should be able to do this is fairly obvious, right? If there is no information in a sequence, well then that sequence must be random. This means any sequence is just as good as any other, and \(N_e=N\), where \(N=D^L\) is the total number of sequences (all sequences are functional at the same level, namely not functional at all). And in that case, \(P=1\) obviously. But now suppose that every single bit in the sequence is functional. That means you can't change anything in that sequence without destroying that function, which implies that there is only one such sequence. (If there were two, you could make at a minimum one change and still retain function.) In that case, \(N_e=1\) and \(P=1/N\).

What is a good formula for information content that gives you \(P=1\) for zero information, and \(1/N\) for full information? If \(I\) is the amount of information (measured in units of monomers of the polymer), the answer is
\(P=D^{-I}.\)      (2)
Let's quickly check that. No information is \(I=0\), and \(D^0=1\) indeed.  Maximal information is \(I=L\) (every monomer in the length \(L\) sequence is information). And \(D^{-L}=1/N\) indeed. (Scroll up to the sentence "How many sequences are there of length \(L\)", if this is not immediately obvious to you.)

The formula (2) can actually be derived, but let's not do this here. Let's just say we guessed it correctly. But this formula, at first sight, is a monstrosity. If you take it seriously, it should shake you to the bone.

Not shaken yet? Let me help you out. Let us imagine for a moment that \(D=4\) (yeah, nucleotides!). Things will not get any better, by the way, if you use any other base. How much information is necessary (in that base) to self-replicate? Actually, this question does not have an unambiguous answer. But there are some very good guesses at the lower bound. In the lab of Gerry Joyce at the Scripps Research Institute in San Diego, for example, hand-designed self-replicating RNAs can evolve [1]. How much information is contained in them?
Prof. Gerald Joyce, Scripps Research Institute
We can only give an upper bound, because while it takes 84 bits to specify this particular RNA sequence, only 24 of those bits are actually evolvable. The 60 un-evolvable bits (they are un-evolvable because that is how the team set up the system) could, in principle, represent far less information than 60 bits. This may not be obvious to you yet, but explaining it now would be distracting; I'll come back to it further below.

Let's take this number (84 bits) at face value for the moment. How likely is it that such a piece of information emerged by chance? According to our formula (2), it is about
\(P\approx7.7\times 10^{-25} \)
That's a soberingly small likelihood. If you wanted to have a decent chance to find this sequence in a pool of RNA molecules of that length, you'd have to have about 27 kilograms of RNA. That's almost 60 pounds, for those of you that... Never mind.

The point is, wherever linear heteropolymers are assembled by chance, you're not gonna get 27 kilograms of that stuff. You might get significantly smaller amounts (billions of times smaller), but then you would have to wait a billion times longer. On Earth, there wasn't that much time (as Life apparently arose within half a billion years of the Earth's formation). Now, as I alluded to above, the Lincoln-Joyce self-replicator may actually encode less information than the 84 bits it took to write it down. After all, at the origin of this replicator was intelligent design; a randomly generated one may require fewer bits. Still, we are left with the problem: can self-replicators emerge by chance at all?

This blog post is, really, about these two words: "by chance". What does this even mean?

When writing down formula (2), "by chance" has a very specific meaning. It means that every polymer to be "tried out" has an equal chance of occurring. "Occurring", in chemistry, also has a specific meaning. It means "to be assembled from existing monomers", and if each polymer has an equal chance to be found, then that means that the likelihood to produce any monomer is also equal.

For us, this is self-evident. If I want to calculate the likelihood that a random coin toss creates 10 heads in a row by chance, I take the likelihood of "heads" and take it to the power of ten. But what if your coin is biased? What if it is a coin that lands on heads 60% of the time? Well then: in that case, the likelihood to get ten heads in a row is not 1 in 1,024 anymore but rather \((0.6)^{10}\), a factor of about 6.2 larger. This is quite a gain given such a small change in likelihood for a single toss (from 0.5 to 0.6). But imagine that you are looking for 100 heads in a row. The same change in bias now buys you a factor of almost 83 million! And for a sequence of 1,000 heads in a row, you are looking at an enhancement factor of .... about \(10^{79}\).
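
To make the arithmetic concrete, here is a minimal Python sketch of the enhancement calculation above; it assumes nothing beyond the 0.5 versus 0.6 per-toss probabilities.

```python
# Enhancement factor for getting n heads in a row when the per-toss
# probability of heads rises from 0.5 to 0.6.
for n in (10, 100, 1000):
    fair = 0.5 ** n
    biased = 0.6 ** n
    print(f"n = {n:4d}: enhancement = {biased / fair:.3g}")
# n =   10: enhancement = 6.19
# n =  100: enhancement = 8.28e+07
# n = 1000: enhancement = 1.52e+79
```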

That is the power of bias on events with small probabilities. Mind you, getting 100 heads in a row is still a small probability, but gaining almost eight orders of magnitude is not peanuts. It might be the difference between impossible and... maybe-after-all-possible. Now, how can this be of use in the origin of life?

As I explained, formula (2) relies on assuming that all monomers are created equally likely, with probability \(1/D\). When we think about the origin of life in terms of biochemistry, we begin by imagining a process that creates monomers, which are assembled into those linear heteropolymers, and then copied somehow. (In biochemical life on Earth, assembly is done in a template-directed manner, which means that assembly and copying are one and the same thing.) But whether assembly is template-directed or not, how likely is it that all monomers occur spontaneously at the same rate? Any biochemist will tell you: extremely unlikely. Instead, some of the monomers are produced spontaneously at one rate, and others at a different rate. And these rates depend on local circumstances, like temperature, pH level, abundance of minerals, abundance of just about any element, as it turns out. So, depending on where you are on a pre-biotic Earth, you might be faced with wildly different monomer production rates.

This uneven-ness of production can be viewed as a D-sided "coin" where each of the D sides has a different probability of occurring. We can quantify this uneven-ness by the entropy that a sequence of such "coin" tosses produces. (I put "coin" in quotes because a D-sided coin isn't a coin unless D=2. I'm just trying to avoid saying "random variable" here.) This entropy (as you can glean from the Information Theory tutorial that I've helpfully created for you, starting here) is equal to the length of the sequence if each monomer indeed occurs at rate 1/D (and we take logs to base D), but is smaller than the length if the probability distribution is biased. Let's call \(H(a)\) the average entropy per monomer, as determined by the local biochemical constraints. And let's remember that if all monomers are created at the same exact rate, \(H(a)=1\) (its maximal value), and Eq. (2) holds. If the distribution is uneven, then \(H(a)<1\). The entropy of a spontaneously created sequence is then \(L\times H(a)\), which is smaller than \(L\). In a sense, it is not random anymore, if by random we understand "each sequence equally likely". How could this help increase the likelihood of spontaneous emergence of life?

Well, let's take a closer look at the exponent in Eq. (2), the information \(I\). Under certain conditions that I won't get into here, this information is given by the difference between sequence length \(L\) and entropy \(H\)
\(I=L-H.\)   (3)
That such a formula must hold is not very surprising. Let's look at the extreme cases. If a sequence is completely random, then \(H(a)=1\), and therefore \(H=L\), and therefore \(I=0\). Thus, a random sequence has no information. On the opposite end, suppose there is only one sequence that can do the job, and any change to the sequence leads to the death of that sequence. Then, the entropy of the sequence (which is the logarithm of the number of ways you can do the job), must be zero. And thus in that case the sequence is all information: \(I=L\).  While the correct formula (3) has plenty more terms that become important if there are correlations between sites, we are going to ignore them here.

So remember that the probability for spontaneous emergence of life is so small because \(I\) is large, and it is in the exponent. But now we realize that the \(L\) in (3) is really the entropy of a spontaneously created sequence, and if \(H(a)<1\), then the first term is \(L\times H(a)<L\). This can help a lot because it makes \(I\) smaller. It helps a lot because the change is in the exponent. Let's look at some examples.

We could first look at English text. The linear heteropolymers of English are strings of the letters a-z (let's just stick with lower case letters and no punctuation for simplicity). What is the likelihood to find the word \({\tt origins}\) by chance? If we use an unbiased typewriter (our 26-sided coin), the likelihood is \(26^{-7}\) (about 1 in 8 billion), as \({\tt origins}\) is a 7-mer, and each mer is information (there is only one way to spell the word \({\tt origins}\)). Can we do better if our typewriter is biased towards English? Let's find out. If you analyze English text, you quickly notice that letters occur at different frequencies: e more often than t, which occurs more often than a, and so forth. The plot below is the distribution of letters that you would find.

Letter distribution of English text
The entropy-per-letter of this distribution is 0.89 mers. Not very different from 1, but let's see how it changes the 1 in 8 billion odds. The biased-search chance is, according to this theory, \(P_\star=26^{-7\times 0.89}\), which comes out to about 1.5 per billion: an enhancement of more than a factor of 12. Obviously, the enhancement is going to be more pronounced the longer the sequence. We can test this theory in a more appropriate system: self-replicating computer programs.
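
If you want to check these numbers yourself, here is a small Python sketch. The letter frequencies are standard published approximations that I am substituting for the exact data behind the plot above; they reproduce the 0.89 mers and the factor of roughly 12.

```python
import math

# Approximate English letter frequencies (in percent); illustrative
# values, not the exact data used for the plot above.
freq = {'a': 8.17, 'b': 1.49, 'c': 2.78, 'd': 4.25, 'e': 12.70,
        'f': 2.23, 'g': 2.02, 'h': 6.09, 'i': 6.97, 'j': 0.15,
        'k': 0.77, 'l': 4.03, 'm': 2.41, 'n': 6.75, 'o': 7.51,
        'p': 1.93, 'q': 0.10, 'r': 5.99, 's': 6.33, 't': 9.06,
        'u': 2.76, 'v': 0.98, 'w': 2.36, 'x': 0.15, 'y': 1.97,
        'z': 0.07}

D = 26
total = sum(freq.values())
p = [v / total for v in freq.values()]

# Entropy per letter in "mers", i.e., with logs taken to base D = 26.
H_a = -sum(q * math.log(q, D) for q in p)
print(f"entropy per letter: {H_a:.2f} mers")        # about 0.89

L = 7                                               # 'origins' is a 7-mer
enhancement = D ** (-L * H_a) / D ** (-L)
print(f"enhancement over unbiased search: {enhancement:.1f}")  # a bit more than 12
```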

That you can breed computer programs inside a computer is nothing new to those who have been following the field of Artificial Life. The form of Artificial Life that involves self-replicating programs is called "digital life" (I have written about the history of digital life on this blog), and in particular the program Avida. For those who can't be bothered to look up what kind of life Avida makes, let's just focus on the fact that avidians are computer programs written in a language that has 26 instructions (conveniently abbreviated by the letters a-z), executed on a virtual CPU (you don't want digital critters to wreak havoc on your real CPU, do you?). The letters of these linear heteropolymers have specific meanings on that virtual CPU. For example, the letter 'x' stands for \({\tt divide}\), which when executed will split the code into two pieces.

Here's a sketch of what this virtual CPU looks like (with a piece of code on it, being executed)
Avidian CPU and code (from [2]). 
When we use Avida to study evolution experimentally, we seed a population with a hand-written ancestral program. The reason we do this is because self-replicators are rare within the avidian "chemistry": you can't just make a random program and hope that it self-replicates! And that is, as I'm sure has dawned on the reader a while ago, where Avida's importance for studying the origin of life comes from. How rare is such a program?

The standard hand-written replicator is a 15-mer, but we are sure that not all 15 mers are information. If they were, then its likelihood would be \(26^{-15}\approx 6\times 10^{-22}\), and it would be utterly hopeless to find it via a random (unbiased) search. It would take about 50,000 years if we tested a million strings a second, on one thousand computers in parallel. We can estimate the information content by sampling the ratio \(\frac{N_e}{26^{15}}\), that is, instead of trying out all possible sequences, we try out a billion, and take the fraction of self-replicators to be representative of the overall fraction. (If we don't find any, try ten billion, and so forth).
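
Here is the back-of-envelope arithmetic behind those numbers, as a small Python sketch:

```python
# Number of avidian 15-mers, the chance of hitting a specific one, and
# how long an exhaustive search would take at one million strings per
# second on one thousand machines.
N = 26 ** 15
print(f"{N:.3g} sequences, P = {1 / N:.1g}")     # ~1.68e+21, ~6e-22

seconds = N / (1e6 * 1e3)
years = seconds / (3600 * 24 * 365)
print(f"exhaustive search: about {years:,.0f} years")   # ~53,000 years
```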

When we created 1 billion 15-mers using an unbiased distribution, we found 58 self-replicators. That was unexpectedly high, but it pins down the information content to be about
\(I(15)=-\log_D(58\times 10^{-9})\approx 5.11 \pm 0.04 \) mers.
The 15 in \(I(15)\) reminds us that we were searching within 15 mer space only. But wait: about 5 mers encoded in a 15 mer? Could you write a self-replicator that is as short as 5 mers?

Sadly, no. We tried all 11,881,376 5-mers, and they are all as dead as doornails. (We test those sequences for life by dropping them into an empty world, and then checking whether they can form a colony.) 

Perhaps 6-mers, then? Nope. We checked all 308,915,776 of them. No sign of life. We even checked all 7-mers (over 8 billion of them). No colonies. No life. 

We did find life among 8-mers, though. We first sampled one billion of them, and found 6 unique sequences that would spontaneously form colonies [2]. That number immediately allows us to estimate the information content as 
                       \(I(8)=-\log_D(6\times 10^{-9})\approx 5.81 \pm 0.13 \) mers,
which is curious. 

It is curious because according to formula (2) waaay above, the likelihood of finding a self-replicator should only depend on the amount of information in it. How can that information depend on the length of sequence that this information is embedded in? Well it can, and you'll have to read the original reference [2] to find out how. 

By the way, we later tested all sequences of length 8 [3], giving us the exact information content of 8-mer replicators as 5.91 mers. We even know the exact information content of 9-mer replicators, but I won't reveal that here. It took over 3 months of compute time to get this, and I'm saving it for a different post.  
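
For the curious, here is a small Python sketch of how such estimates can be obtained from a sampled fraction. The error bars are computed under my own assumption that the quoted uncertainties are simple Poisson counting errors on the number of replicators found; with that assumption, the numbers above are reproduced.

```python
import math

D = 26  # size of the avidian instruction alphabet

def info_estimate(found, tried):
    """Information content I = -log_D(found/tried), in mers, with a
    Poisson (1/sqrt(found)) counting error -- an assumption on my part."""
    I = -math.log(found / tried, D)
    dI = 1 / (math.sqrt(found) * math.log(D))
    return I, dI

for L, found, tried in [(15, 58, 1e9), (8, 6, 1e9)]:
    I, dI = info_estimate(found, tried)
    print(f"I({L}) = {I:.2f} +/- {dI:.2f} mers")
# prints I(15) = 5.11 +/- 0.04 and I(8) = 5.81 +/- 0.13
```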

But what about using a biased typewriter? Will this help in finding self-replicators? Let's find out! We can start by using the measly 58 replicators found by scanning a billion 15-mers, and making a probability distribution out of it. It looks like this:
Probability distribution of avidian instructions among 58 replicators of L=15. The vertical line is the unbiased expectation.
It's clear that some instructions are used a lot (b,f,g,v,w,x). If you look up what their function is, they are not exactly surprising. You may remember that 'x' means \({\tt divide}\). Obviously, without that instruction you're not going to form colonies. 

The distribution has an entropy of 0.91 mers. Not fantastically smaller than 1, but we saw earlier that small changes in the exponent can have large consequences. When we searched the space of 15 mers with this distribution instead of the uniform one, we found 14,495 replicators among a billion tried, an enhancement by a factor of about 250. Certainly not bad, and a solid piece of evidence that the "theory of the biased typewriter" actually works.  In fact, the theory underestimates the enhancement, as it predicts (based on the entropy 0.91 mers) an enhancement of about 80 [2].
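
A quick numerical check of that comparison, using only the numbers quoted above:

```python
# Observed enhancement of the biased search (14,495 vs. 58 replicators
# per billion sampled), compared to the enhancement predicted from the
# 0.91 mers per-instruction entropy via D**(L * (1 - H_a)).
D, L, H_a = 26, 15, 0.91
print("observed :", 14495 / 58)             # about 250
print("predicted:", D ** (L * (1 - H_a)))   # about 80
```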

We even tested whether taking the distribution generated by the 14,495 replicators, which certainly is a better estimate of a "good distribution", will net even more replicators. And it does indeed. Continuing like this allows your search to zero in on the "interesting" parts of genetic space in ever more laser-like fashion, but the returns are, understandably, diminishing.

What we learn from all this is the following: do not be fooled by naive estimates of the likelihood of spontaneous emergence of life, even if they are based on information theory (and thus vastly superior to estimates that simply claim \(P=D^{-L}\)). Real biological systems search with a biased distribution. The bias will probably go "in the wrong direction" in most environments. (Imagine an avidian environment where 'x' is never made.) But among the zillions of environments that may exist on a prebiotic Earth, a handful might have a distribution that is close to the one we need. And in that case, life suddenly becomes possible. 

How possible? We still don't know. But at the very least, the likelihood does not have to be astronomically small, as long as nature will use that one little trick: whip out that biased typewriter, to help you mumble more coherently. 

[1] T. A. Lincoln and G. F. Joyce, Self-sustained replication of an RNA enzyme, Science 323, 1229–1232, 2009.
[2] C. Adami and T. LaBar, From entropy to information: Biased typewriters and the origin of life. In: “From Matter to Life: Information and Causality” (S.I. Walker, P.C.W. Davies, and G. Ellis, eds.) Cambridge University Press (2017), pp. 95-113. Also on arXiv
[3] Nitash C.G., T. LaBar, A. Hintze, and C. Adami, Origin of life in a digital microcosm. Phil. Trans. Roy. Soc. A 375: 20160350. 

Wednesday, March 30, 2016

Ten Years (give or take) in the Evolution of a Protein

How do proteins evolve? Generally the answer is "Very slowly!". But sometimes, protein evolution can be blazingly fast. How fast, you ask? Ask instead the lizards of the South Adriatic Sea!

OK, where is the South Adriatic Sea? you ask. You should really be asking "What about those lizards?", but here we go. The Adriatic Sea separates Italy from the Balkan peninsula, as in the picture below (upper left corner). So in 1971, researchers decided to take a species of lizards (known as Podarcis sicula, the Italian wall lizard) found on the small island Kopiste, and transplant them to the neighboring small island Mrcaru. 
Adriatic Sea (top left). Pod Kopiste is the tiny island on the left, and Pod Mrcaru is to its right. The larger island is the inhabited Lastovo (credit: Google World)
I don't know why they did it. They transplanted five adult breeding pairs, so they were intent on creating havoc, no doubt. Or an experiment, perhaps? But the Croatian War of Independence intervened, and the lizards were all but forgotten until a team returned to Mrcaru in 2004 to look at the local lizards there. And they found that the offspring of the ten had essentially overrun the island, and changed in profound ways. On Kopiste, the lizards ate mostly insects. On Mrcaru, instead, there was an abundance of plants for food, and comparatively fewer insects. The insect-eating lizards, however, were not adapted to digest plants, something that requires a different gut structure, one that ensures that the plant material stays in the gut long enough for its cellulose to be broken down. If it does not stay there long enough, you can't get the energy out of it. It turns out that the lineage on Mrcaru evolved so-called cecal valves, something that does not usually occur in lizards. The cecal valves close off part of the gut, so that certain types of bacteria can ferment the cellulose in there. This is stunning mainly because this adaptation took just over thirty years. It turns out that other body characteristics had changed too: longer, wider, and taller heads that translate into larger forces to bite down on the tough fibrous plants. The lizards needed to survive: this is how they did it.

Can proteins really evolve that fast? It seems that the answer is: "If you really really have to, then yes". What a pity that we haven't been able to sample the sequences of the proteins involved over the thirty some years. Wouldn't that give us a fantastic window on protein evolution? But how can you know that a protein is about to undergo fundamental changes?

It turns out that you can, if you modify the environment in such a way that it becomes unlivable for the organism involved, and you then look for those types that survive the slaughter. Sounds immoral? But we do it all the time, when we give drugs to fight viral infections! The example I will use is the evolution of drug resistance in a protein of the Human Immunodeficiency Virus (HIV), the virus that causes AIDS.

AIDS broke out in the Western population in 1981, but it took fourteen years to develop the first effective anti-viral treatment: a drug that inhibits a crucial piece of the HIV machinery, the protease. To understand the drug and what the protease does, we have to spend some time with the somewhat unusual life cycle of HIV. It is a retrovirus, which means that its genetic material is RNA, not DNA. The virus infects cells that are crucial in people's ability to fight infections, which explains to a large extent why it is so deadly: it attacks precisely the system that is supposed to save you. The figure below gives you an idea of the virus's life cycle.

HIV life cycle. Source: Wikimedia
After the virus capsid (the shell that encapsulates the virus RNA along with a few necessary molecules) binds to the cell (here, a T-cell, which is a type of white blood cell that plays a central role in the immune system), the virus injects the capsid's material into the cell. Along with the RNA in the capsid comes an enzyme called the "reverse transcriptase", which is able to make a DNA copy from the RNA material, and this DNA copy is subsequently inserted ("integrated") into the host cell's DNA. Now, the DNA of every cell is constantly transcribed and then translated into proteins, and the same is going to happen to the foreign DNA that was inserted into the host cell. Willy-nilly, the cell makes proteins from the virus's information: it is making virus parts. But it turns out that unlike your own genes, which have stop signals to indicate where each protein ends, the foreign DNA (made from virus RNA) does not have those. As a consequence, the cellular machinery produces one long long protein, called a "polyprotein". It is, of course, totally unusable in this form. It must be cleaved (meaning "cut") into the functional pieces with a knife. Where can the virus find such a knife? Well, it makes it itself, and it carries a copy with it in the capsid. Armed with this knife, the virus cuts the polyprotein into all the pieces that are needed to assemble another functional capsid (including the protease and the reverse transcriptase), and packages them with copies of the RNA genetic code (which the cell helpfully made for free) into new capsids. The action of the knife (called a "protease") is shown in the lower left corner of the life cycle diagram above.

"If I could just blunt this knife", is what HIV researchers were asking themselves, and they found just the way to do it. Take a look at the molecular structure of the protease in the figure below. 

The HIV protease is a dimer (meaning it is made out of two copies of the same protein that bind to each other, here in cyan and green). Two particular amino acids that are important in the activity of the molecule are colored red and purple.
See the hole in the middle, surrounded by the red and purple amino acids? That's where the polyprotein fits in, and the protease cuts it like a cigar cutter at specific points that are recognized by the red and purple residues. How do you inactivate the cigar cutter? You stick something in there to block the hole! Indeed, this is how all protease inhibitors (that is, drugs that inhibit the activity of the protease) work. 

When these drugs hit the market, they were replacing older drugs that had nasty side effects. And these new drugs worked like magic! The only trouble was that the virus was not going to capitulate that easily. Indeed, researchers had created just the scenario that we were calling for above: change the environment in such a manner that makes it unlivable for an organism, and see how it can cope. 

HIV protease inhibitors work really well (in particular if combined with another drug, the reverse transcriptase inhibitor), which means that the virus population all but goes extinct. The important modifier here is "all but". Instead of going extinct, it goes into hiding, and researchers don't really know where. As you can imagine, finding this hiding spot (and how to coerce the virus to leave it) is a major effort of HIV research today. A problem arises if a patient forgets to take their antiviral drugs. The virus comes out, starts replicating (slowly), and the high mutation rate of the virus creates the opportunity to evolve quickly. HIV can evolve resistance to a protease inhibitor within two weeks. This is not altogether surprising: when unchecked, the virus creates an enormous number of copies of itself (correct and flawed) every day, so that every single mutation of the nearly 10,000 nucleotide genome is tried multiple times every day, and every pair of mutations a few times. This is enough to cause rapid evolution, and if a single virus finds a way to survive the massacre the drug unleashes, that virus will grow in numbers and create the seeds of a new destructive force that the inhibitor is unprepared for. When resistance emerges, researchers go back to the lab to develop a new type of protease inhibitor, a new way to dull the knife. While it is effective for a while, evolution ultimately keeps up, and finds a way to evade it. How do we stop this maddening race?
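
To see where the "every mutation, every day" statement comes from, here is a back-of-envelope sketch. The input numbers, roughly \(10^{10}\) new virions per day in an untreated patient and a per-site error rate of about \(3\times10^{-5}\) per replication, are commonly quoted ballpark figures, not values taken from this post.

```python
# Ballpark figures (my assumptions): ~1e10 new virions per day in an
# untreated patient, an error rate of ~3e-5 per site per replication,
# and a genome of roughly 1e4 nucleotides.
virions_per_day = 1e10
mu = 3e-5   # mutations per site per replication

# Expected number of new virions per day carrying a mutation at one
# specific site, and carrying one specific pair of mutations:
per_site = virions_per_day * mu
per_pair = virions_per_day * mu ** 2
print(f"specific single mutation: ~{per_site:,.0f} per day")   # ~300,000
print(f"specific double mutation: ~{per_pair:.0f} per day")    # ~9
```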

The history of this fight between the virus and the drugs that attempt to keep it at bay is well documented, as it occurred after we had figured out how to sequence stuff. Every paper that relied on patient data, and every drug trial, was asked to deposit its sequence data (namely, the sequence of the virus extracted from the patients) in publicly accessible databases. This sequence data became the "fossils" of this evolutionary history, and it is made from the viral RNA of patients that fought this fight on the front line. Many of those did not survive the fight, but they bequeathed their virus's sequence data to us for posterity so that we can, perhaps, save the next generation.

Patients that were enrolled in a multitude of drug trials would have the virus's information sequenced, and these records ultimately found their way into Stanford University's HIV resistance database (HIVdb). All sequence data is usually deposited in central repositories such as Genbank, but Stanford's HIVdb performs an enormous service by curating the HIV data on a single site, and developing tools and algorithms to investigate that data. In my lab, I decided that we should mine this "fossil record" to understand how HIV is adapting to, and attempting to evade, the drugs thrown at it. The evolution of drug resistance in HIV can thus be seen as a long-term evolution experiment (LTEE), only it is short compared to the LTEE, and we do not have frozen isolates. The Stanford database is a compendium that allows users to query all sorts of information about sequence, type, and resistance profile. For our purposes, namely to study how the sequence evolves, we need only two things: sequences, and whether the patients who donated the sequence were receiving anti-viral drugs. 

To understand how evolution is affecting a protein, we have to discuss the concept of the "fitness landscape". Entire series of blog posts can be written about this concept, but we don't have that kind of space here. Broadly speaking, a fitness landscape is an idealized picture of how the fitness of an organism depends on either the traits or the genome that determine the organism. Here, we will focus on the mapping between sequence and fitness, not traits and fitness. In such a picture, the fitness is the "elevation", and the sequence is the coordinate. If you search for "fitness landscape" you will almost invariably end up with a picture that originates from my lab. Give it a try! You might for example find this: 

A rugged fitness landscape with different evolutionary paths. Credit: Randal S. Olson
This is a rendering of a rugged fitness landscape that my student (at the time) Randy Olson created for a manuscript that we ended up not finishing.  The general idea depicted there is that mutation-by-mutation you could move peak-to-peak, or if this is not possible, you might choose a path that tries to maximize fitness, even though you may have to walk in the valleys between peaks for a (short) while.

If you consider a protein landscape (the z-axis values in the landscape represent how well a protein is doing its job), then most proteins occupy a peak, because if they did not, then mutations would move them closer to the peak until there are no more ways to improve the protein. Drugs that attack the function of a protein (such as the protease inhibitor blunting the protease as described above) change the landscape profoundly: you can imagine that they simply erase the peak. You might think that this would kill the organism (if the protein is essential). But due to the high mutation rate of HIV, there are actually a lot of variants that exist in the population. Many of them are completely defective, but some of them "live" at the edges of the fitness peak that the un-mutated protein occupies. Because they are barely functional they usually do not play a role. But when the main peak is eliminated, the sequences at the fringes may be the only ones to survive. They make a virus that replicates very slowly, but replicate it does. And thus evolution can continue: if there is any way to improve the function of the protein, that path will be taken. The protein will find a distant peak to climb, and the virus is resurrected: it has evolved resistance to the drug.

Even though research has discovered more and more potent anti-viral drugs, which attack different proteins and are thus more effective than any single drug can be, the virus ultimately will evade them, in particular if the patient forgets to take the drug so that the virus can replicate faster and thus accumulate mutations faster. Is there no way to stop this?

In research that has just appeared in the journal PLoS Genetics, my colleague Aditi Gupta (now a postdoctoral researcher at the New Jersey Medical School of Rutgers University)  and I studied how the virus adapts to more and more complex drug environments over a span of almost 10 years. We studied the evolution of the HIV protease (the molecule you encountered above) using sequences deposited in the Stanford database. We found two things: First: in patients that did not receive drugs, the protease molecule was not evolving. Second, in patients that did receive drugs, the protease molecule was evolving quickly, but it evolved in a peculiar way: by storing information in epistatic interactions, rather than in residue changes.

Ok Ok, I realize that this was a mouthful. First, what was that bit about information? You see, for a protein (as well as all life, in the end) everything is about information. A protein that "does its job" has information about the environment within which it is active. Its sequence encodes that information, but it is information about that environment. You change the environment, and what used to be information may not be information anymore. Information is contextual (as I argue in a series of blog posts that starts here). The evolution of drug resistance, in the light of information theory, is then just the quest to "learn" (that is acquire information) about that new world, the new context. 

And it so happens that you can store information in different ways in a sequence. You can certainly store it in the individual symbols that make up the sequence. That is how we usually think of storing information. It is less well-known that you can also store information in the correlations between symbols. I don't know of a good way to make this intuitive. Information is something that allows you to make predictions (as I argue in the above-mentioned series). A single site being an 'A' (instead of a 'C', 'G', or 'T') might be predictive of a particular environmental state. But you can imagine that a site being an 'A' as long as a very particular other site is a 'G' can also be predictive, as long as the only pairs that are allowed are 'AG' and 'GA'. This kind of "dependence" between sites is known as "epistasis" in genetics. There is an enormous amount of literature about epistasis in genetics (as there should be, as I believe it to be the central concept in evolutionary biology) but this post is already too long, so I must refer you to the wiki pages to learn more.
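
Here is a toy example (mine, not from the paper) that makes the 'AG'/'GA' intuition concrete. Each of the two sites, looked at by itself, is perfectly random, yet knowing one site predicts the other exactly: a full bit of information is stored in the correlation, and none in the individual sites.

```python
import math
from collections import Counter

# A toy "alignment" in which only the two-site patterns 'AG' and 'GA'
# occur, equally often.
pairs = ['AG', 'GA'] * 500          # 1000 two-site sequences

def entropy(counts):
    """Shannon entropy (in bits) of a Counter of observed symbols."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

H1 = entropy(Counter(p[0] for p in pairs))   # entropy of site 1: 1 bit
H2 = entropy(Counter(p[1] for p in pairs))   # entropy of site 2: 1 bit
H12 = entropy(Counter(pairs))                # joint entropy:     1 bit
MI = H1 + H2 - H12                           # mutual information: 1 bit

print(H1, H2, H12, MI)   # 1.0 1.0 1.0 1.0
```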

What I argue thus, in a nutshell, is that you can store information in substitutions (of residues) or you can store it in epistatic interactions between residues. What Aditi Gupta and I found by analyzing the "fossil record" of almost ten years of protein evolution is that the protease mostly stored information in the linkages between residues. 

I know what you are asking: "Why would a protein do that, and what are the consequences?" These are good questions. Let's investigate them one by one. 

Storing information in "correlated changes" (epistatic interactions) is a necessity if you are rushed. The reason is technical, and you are forgiven if you don't grasp the entirety of the argument. Single substitutions (the "simple" way to store information) have serious repercussions for a protein, as substitutions (on average) destabilize the protein. Yes, you do remember that a protein has to fold into its structural conformation, and it doesn't just do that willy-nilly (that's the second time I used that construction, isn't it?). This fold has to be energetically favored, and changes in the residues usually make things worse for those energetics. This isn't a problem if a substitution makes it just a little harder to fold, and if at the same time you have enough time to correct for that problem by making a compensating substitution somewhere else, later. But if time is of the essence (as when the protein just found its peak utterly annihilated) you can't just substitute a residue, because you probably have to substitute another too, and that would make the protein not fold. A non-folded protein is a dead protein. It cannot wait for a substitution that will save it.

But as I pointed out, there is another way to "learn" (that is, acquire information) by changing the way residues interact. Such changes affect the folding free energy of the protein very little, and as a consequence this is the favored mode of information acquisition if time is of the essence. What we find in the fossil record is that, indeed, this is how evolution proceeds.

What are the consequences? Well, they are likely to be profound. If a protein evolves to store information in linkages between residues, that implies that the protein becomes more and more constrained. After doing this for a while, there aren't that many residues left that are free to vary, as there are so many relationships between residues that need to be satisfied. In theory, this means that the protein is evolving itself into a corner from which there may be no escape. What it means is that the protein inhabits a fitness landscape that becomes more and more rugged the more interactions are being locked in between residues. 

Let me show you some of the technical evidence that appears in the paper. In the figure below, you see something we call "sum of pairwise MI", where MI stands for "mutual information". You can think of that measure as representing the amount of information stored in the linkages between residues in the protein. As a matter of fact, you shouldn't just think of it in those terms: it is precisely that. This measure is increasing in patients that respond to drug treatment (blue triangles), but does not change in patients that are not receiving those drugs (but really are wishing they would).


Pairwise epistasis, measured in terms of mutual information, as a function of time in the HIV-1 protease. Triangles: patients taking anti-viral drugs. Circles: patients not taking any anti-viral drugs.
What this plot shows is that the proteins that are adapting to drugs do so by creating functional links between residues, and this evolution persists as more and more sophisticated drugs are introduced. But the trend seems to be stalling within the last three years. Could it be that the virus is becoming so constrained that further adaptation is impossible?

I wish I knew the answer to this question, but I don't. At least from the time course we investigated in this paper, there is no evidence that the protein has slowed its evolution. But I must caution that we only investigated the evolution of the HIV protease for the years 1998-2006. There is sequence data for the years after 2006, of course, but our study was explicitly comparing the response of patients that took anti-viral drugs to those that did not. And after 2006, you could not find enough sequences from patients not taking anti-viral drugs in the database to make statements that were statistically sound. We understand the reason for this, of course, as the anti-viral drugs had become so potent that it would be morally reprehensible to withhold them from a control group. 

It is possible that a slow-down of evolution can be discerned in the sequences of patients that were exposed to anti-viral drugs post 2006. That would be a stunning development, which would have profound implications for the evolution of drug resistance in HIV. The data is there. Who wants to analyze it?

The study I discuss was published as:


A. Gupta and C. Adami, "Strong selection significantly increases epistatic interactions in the long-term evolution of a protein". PLoS Genetics 12 (2016) e1005960.

Friday, March 4, 2016

On quantum measurement (Part 7: There goes the Copenhagen Interpretation)

So this is the final installment of the "On quantum measurement" series. You may have arrived here by reading all previous parts in one sitting (I've heard of such feats in the comments). This is the apotheosis: what all these posts have been gearing up to. If, for some reason that only the Internets know, you have arrived here without the benefit of the first six installments, I'll provide you with the link to the very first installment, but I won't summarize all the posts, out of deference to all the readers who got here the conventional way. 

The Copenhagen Interpretation of quantum mechanics, as I'm sure all of you who have arrived at Part 7 are aware, is a view of the meaning of quantum mechanics promulgated mostly by the Danish physicist Niels Bohr, and codified in the 1920s, that is, the "heyday" of quantum physics. Quantum mechanics can be baffling, to be sure, and there are multiple attempts to square what we observe experimentally with our common sense. The Copenhagen Interpretation is an extreme view (in my opinion) of how to make sense of the reflection of the quantum world in our classical measurement devices. So, at its very core, the Copenhagen Interpretation muses about the relationship of the classical to the quantum world.

As a young student of quantum mechanics in the early eighties, I was a bit baffled by this right away. If the true underlying physics is quantum (I mused), so that the classical world is just an approximation of the quantum one, how can we have "theorems" that codify the relationship between quantum and classical systems? 

I won't write a treatise here about the Copenhagen Interpretation. I've already linked the Wikipedia article about it, which should get those of you who are not yet groaning up to speed. I'll just list the two central "things" that are taught just about everywhere quantum mechanics is taught, and that can be squarely traced back to Bohr's school. 

1. Physical systems do not have definite properties prior to being measured, but instead should be described by a set of probabilities.
2. The act of measurement changes the quantum system, so that it takes on only one of the previous possibilities (wave function collapse, or reduction).

Yes, the general understanding of the Copenhagen Interpretation is more multi-faceted, but for the purpose of this post I will focus on the collapse of the wave function. When I first fully understood what that meant, it was immediately clear to me that this was just a load of crap. I knew of no law of physics that could engender such a collapse, and it violated everything I believed in (such as conservation of probabilities). You who read this blog so ardently already know this: it makes no sense from the point of view of information theory. 

Now, quantum information theory did not exist around the time of Bohr (and Heisenberg, who must carry some of the blame for the Copenhagen Interpretation). And maybe the two should get a pass for this simple reason, except for the fact that John von Neumann, as I have pointed out in another post, had the foundations of quantum information theory already worked out in 1932, two years after the first "definitive" treatise on the "Copenhagen spirit" was published by Heisenberg.

So you, faithful reader, come to this post well prepared. You already know that Hans Bethe told me and my colleague Nicolas Cerf that we showed that wave functions don't collapse; you know that John von Neumann almost discovered quantum information theory in the 1930s, and that quantum measurement is very different from its classical counterpart because copying is not allowed in the quantum world. You know where Born's rule comes from, and you pondered the utility of quantum Venn diagrams. You were promised a discussion of Schrödinger's cat, but that never materialized. Instead, you were given a discussion of the quantum eraser. Arguably, that is a more interesting system, but I understand if you are miffed. But to make it up to you, now we get to the quantum grand-daddy of them all. I will show you that the Copenhagen interpretation is not only toast theoretically, but that it is possible to design experiments that will show this. Or they will show that I'm full of the aforementioned crap. Either way, it is going to be exciting. 

In this post, I will reveal to you the mathematical beauty and elegance of consecutive measurements performed on the same quantum system.  I will also show you how looking at three measurements in a row (but not two), will reveal to you that the Copenhagen Interpretation is now history, ripe for the trash heap of ill-conceived concepts in theoretical physics. All of what I'm going to tell you is an extension of the picture that Nicolas Cerf and I wrote about in 1996, and which Bethe understood immediately after we showed him our results, while it took us six months to understand what he told us. But it is an extension that took some time to clarify, so that the indictment of Bohr (and implicitly Heisenberg) and the collapse picture of measurement is  unambiguous, and most importantly, experimentally verifiable. 

Let's get right into the thick of things. But getting started may really be the hardest thing here. Say you want to measure a quantum system. But you know absolutely nothing about it. How do you write such a quantum system?

In general, people write arbitrary quantum states like this: \(|Q\rangle=\sum_i\alpha_i |i\rangle\), with complex coefficients \(\alpha_i\) that satisfy \(\sum_i|\alpha_i|^2=1\). But you may ask, "Who told you what basis to write this quantum state in? The basis states \(|i\rangle\), I mean". After all, the amplitudes \(\alpha_i\) only make sense with respect to a particular basis system: if you transform this basis into another (as we will do a lot in this post), the coefficients change. "So haven't you already assumed a lot by writing the quantum state like that?" (You may remember questions like that from a blog post on classical information, and this is no accident.) 

If you think about this problem for a little while, you realize that indeed the coefficients and the basis you choose are crucial. Just as in classical information theory where I told you that the entropy of a system was undefined, and determined only by the measurement device that you were about to use to learn about it, the state of an arbitrary quantum system only makes sense relative to the quantum states of the detector that you are about to use to measure it. This is, essentially, what is at the heart of the "relative state" formalism of quantum mechanics, due to Everett, of course. That fellow Hugh Everett does not get as much recognition as he deserves, so I'll let you gaze at him for a little while.
H. Everett III (1930-1982) Source: Wikimedia
He cooked up his theory as a graduate student, but as nobody believed his theory at the time, he left quantum physics and became a defense analyst. 

You may expect me to launch into a description and discussion of the "many-worlds" interpretation of quantum mechanics, which became a fad in the 1970s, but I won't. It is silly to call the relative-state picture a "many-worlds" interpretation, because it does not propose at all that at every quantum measurement event the universe splits into so many worlds as there are orthogonal states. This is beyond silly in fact (it was also not at all advocated by Everett), and the people who did coin these terms should be ashamed of themselves (but I won't name them here). My re-statement of Everett's theory in the modern language of quantum information theory can be read here, and in any case Zeh (in 1973) and Deutsch (in 1985) before me had understood much about Everett's theory without imagining some many-worlds voodoo. 

So let us indeed talk about a quantum state by writing it in terms of the basis states of the measurement device we are about to examine it with. Because that is all we can do, ever. Just as we have learned in the first six installments of this series, we will measure the quantum state using an ancilla A, with orthogonal basis states \(|i\rangle_A\). I wrote the 'A' as a subscript to distinguish it from the quantum states, but later I will drop the subscript once you are used to the notation. 

Now look what happens if I measure \(|Q\rangle=\sum_i\alpha_i |a_i\rangle\) with A (to distinguish the quantum states, written in terms of A's basis from the A Hilbert space, we simply write them as \(|a_i\rangle\)). The probability to observe the quantum state in state i is (you remember of course Part 4)
$$p_i=|\langle a_i|Q\rangle|^2=|\alpha_i|^2.$$ 
Now get this: You're supposed to measure a random state, but the probability distribution you obtain is not random at all, but given by the probability distribution \(p_i\), which is not uniform. This makes no sense at all. If \(|Q\rangle\) was truly arbitrary, then on average you should see \(p_i=1/d\) (the uniform distribution), where d is the dimension of the Hilbert space. So an arbitrary unknown quantum state, written in terms of the basis states of the apparatus that we are going to measure it in, should be (and must be) written as
$$|Q\rangle=\sum_i^d\frac1{\sqrt d} |a_i\rangle.$$
Now, each outcome i is equally likely, as it should be if you are measuring a state that nobody prepared beforehand. A random state. With maximum entropy. 

So now we got this out of the way: We know how to write the to-be-measured state. Except that we assumed that the system Q had never interacted with anything (or was measured by anything) before. This also is a nonsense assumption. All quantum states are entangled: there is no such thing as a "pristine" quantum system. Fortunately, we know exactly how to describe that: we can write the quantum wavefunction so that it is entangled with an arbitrary "reference" state R:
$$|QR\rangle=\frac1{\sqrt{d}}\sum_i|a_i\rangle_Q|r_i\rangle_R$$
You can think of R as all the measurement devices that Q has interacted with in the past: who are we to say that A is really the first? Now we don't know really what all these R states are, so we just trace them out, so that the Q density matrix is the familiar
$$\rho_Q=\frac1d\sum_i |a_i\rangle\langle a_i|.$$
After we measured the state with A, the joint state QRA is now (the previous posts tell you how to do this)
$$|QRA\rangle=\frac1{\sqrt d}\sum_i |a_i\rangle|r_i\rangle_R|i\rangle_A.      (1)$$
Don't worry about the R system too much: the Q density matrix is still the same as above, and I have to skip the reason for that here. You can read about it in the paper. Oh yes, there is a paper. Read on.

This is, after all, the post about consecutive measurements, so we will measure Q again, but this time with ancilla B, which is not in the same basis as A. (If it was, then the result would be trivial: you'd just get the same result over and over again: it is like all the pieces of the measurement device A all agreeing on the result). 

So we will say that the B eigenstates are at an angle with the A eigenstates:
$$\langle b_j|a_i\rangle=U_{ij}$$
This just means that what is a zero or one in one of the measurement devices (if we are measuring qubits) is going to be a superposition in the other's basis. U is a unitary matrix. For qubits, a typical U will look like this: 
$$U=\begin{pmatrix} \cos(\theta) & -\sin(\theta)\\ \sin(\theta)& \cos(\theta)\\ \end{pmatrix}$$
where \(\theta\) is the angle between the bases. (Yes, it is a special case, but it will suffice.)

To measure Q with B (after we measured it with A, of course) we have to write Q in terms of B's eigenstates, and then measure. What you get is a wave function that has Q entangled not only with its past (R), but both A and B as well:
$$|QRAB\rangle=\frac1{\sqrt d}\sum_{ij}U_{ij}|b_j\rangle|i\rangle_R|i\rangle_A|j\rangle_B.       (2)$$
You might think that this looks crazy complicated, but the result is really quite simple. And it agrees with everything that has been written about consecutive measurements so far, whether they advocated a collapse picture or a unitary "relative state" picture. For example, the joint density matrix of just the two detectors, \(\rho_{AB}\), is just
$$\rho_{AB}=\frac1d\sum_i|i\rangle\langle i|\otimes\sum_j|U_{ij}|^2|j\rangle\langle j |.$$
That this is the "standard" result will dawn on you when you notice that  \(|U_{ij}|^2\) is the conditional probability to measure outcome j with B given that the previous measurement (with A) gave you outcome i (with probability \(1/d\), of course).
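
If you like to see such formulas come alive, here is a small numerical sketch (in Python with numpy) that builds \(\rho_{AB}\) directly from the expression above, for qubits and an illustrative 45-degree angle between the A and B bases. It implements nothing beyond that formula.

```python
import numpy as np

d = 2
theta = np.pi / 4                     # 45 degrees between the A and B bases
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # U_ij = <b_j|a_i>

basis = np.eye(d)
rho_AB = np.zeros((d * d, d * d))
for i in range(d):
    Pi = np.outer(basis[i], basis[i])             # |i><i| on detector A
    Bi = sum(abs(U[i, j]) ** 2 * np.outer(basis[j], basis[j])
             for j in range(d))                   # sum_j |U_ij|^2 |j><j| on B
    rho_AB += np.kron(Pi, Bi) / d

print(np.round(rho_AB, 3))
# A diagonal matrix: each A outcome occurs with probability 1/2, and
# given A = i, outcome B = j occurs with probability |U_ij|^2 (here 1/2).
```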

Fair warning: if you have not understood this result, you should probably not go on reading. Go on if you must, but remember to go back to this result.

Also, keep in mind that I will from now on use the index i for the system A, the index j for system B, and later on I will use k for system C. And I won't continually indicate the state with a bothersome subscript like \(|i\rangle_A\). Because that is how I roll.

So here is what we have achieved. We have written the physics of consecutive quantum measurements performed on the same system in a manifestly unitary formalism, where wavefunctions do not collapse, and where the joint wavefunction of the quantum system, entangled with all the measurements that have preceded ours, along with our recent attempts with A and B, exists in a superposition, with all the possibilities (realized or not) still present. And the resulting density matrix, along with all the probabilities, agrees precisely with what has been known since Bohr, give or take.

And the whispers of "Chris, what other ways do you know of to waste your time, besides I mean, blogging?" are getting louder.

But wait. There is the measurement with C that I advertised. You might think (along with possibly everybody who has ever contemplated this calculation) "Why would things change?" But they will. The third measurement will show a dramatic difference, and once we're done you'll know why.

First, we do the boring math. You could do this yourself (given that you have followed along well enough to derive Eqs. (1) and (2)). You just use a unitary \(U'\) to encode the angle between the measurement system C and the system B (just like U described the rotation between systems A and B), and the result (after tracing out the quantum system Q and the reference system R, since no one is looking at those) looks innocuous enough:
$$\rho_{ABC}=\frac1d\sum_i|i\rangle\langle i|\otimes\sum_{jj'}U_{ij}U^{*}_{ij'}|j\rangle\langle j'|\otimes \sum_k U^{'}_{jk}U^{'*}_{j'k}|k\rangle\langle k|.         (3)$$
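For the skeptical (or the curious), here is a continuation of the numpy sketch from before (again mine, not the paper's, with both angles assumed to be \(\pi/4\)): it builds the full post-measurement state with A, B, and C attached, traces out Q and R, and checks that what is left is exactly Eq. (3):

```python
import numpy as np

d = 2
def rot(theta):                                         # the qubit rotation from the text
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

U, Up = rot(np.pi / 4), rot(np.pi / 4)                  # assumed angles A->B and B->C
e = np.eye(d)

# |QRABC> = (1/sqrt d) sum_{ijk} U_ij U'_jk |c_k> |r_i> |i>_A |j>_B |k>_C
psi = np.zeros(d**5)
for i in range(d):
    for j in range(d):
        for k in range(d):
            ket = np.kron(np.kron(np.kron(np.kron(e[k], e[i]), e[i]), e[j]), e[k])
            psi += U[i, j] * Up[j, k] * ket / np.sqrt(d)

rho = np.outer(psi, psi.conj()).reshape([d] * 10)       # (Q,R,A,B,C, Q',R',A',B',C')
rho_ABC = np.einsum('qrabcqrxyz->abcxyz', rho).reshape(d**3, d**3)

# Eq. (3), assembled term by term (U and U' are real here, so conjugation is omitted)
rho_eq3 = np.zeros((d**3, d**3))
for i in range(d):
    for j in range(d):
        for jp in range(d):
            for k in range(d):
                op = np.kron(np.kron(np.outer(e[i], e[i]), np.outer(e[j], e[jp])),
                             np.outer(e[k], e[k]))
                rho_eq3 += U[i, j] * U[i, jp] * Up[j, k] * Up[jp, k] * op / d
print(np.allclose(rho_ABC, rho_eq3))                    # True
```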
Except after looking this formula over a couple of times, you squint. And then you go "Hold on, hold on".

"The B measurement!", you exhale. After measuring with B the device was diagonal in the measurement basis (this means that the density matrix was like \(|j\rangle \langle j|\)). But now you measured Q again, and now B is not diagonal anymore (now it's like \(|j\rangle \langle j'|\)). How is that possible?

Well, it is the law, is all I can tell you. Quantum mechanics requires it. Density matrices, after all, only tell us part of the story (since you are tracing out the entire history of measurements). That story could be full of lies, and here it turns out it actually is.

It is the last measurement that gives a density matrix that is diagonal in the measurement basis, always. Oh, and the first one, if you measure an arbitrary unknown state. That's two. To see that things can be different, you need a third. The one in between.

To see that Eq. (3) is nothing like what you are used to, let's see what a collapse picture would give you. A detailed calculation using the conventional formalism leads to the following (the superscript "coll" is there to remind you that this is NOT the result of a unitary calculation):

$$\rho_{ABC}^{{\rm coll}}=\frac1d\sum_i|i\rangle\langle i|\otimes \sum_j|U_{ij}|^2|j\rangle\langle j|\otimes \sum_k|U_{jk}^{'}|^2|k\rangle\langle k|.       (4)$$

The difference between (3) and (4) should be immediately obvious to you. You get (4) from (3) if you keep only the terms with \(j=j'\), that is, if you remove the off-diagonal terms that are present in (3). But, you see, there is no law of physics that allows you to just grab some off-diagonal terms and yank them out of the matrix. That means that (3) is a consequence of quantum mechanics, while (4) is not derived from anything. It is really just wishful thinking.
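Continuing the sketch above (same assumed \(\pi/4\) angles, same variable names), you can build Eq. (4) as well and check both claims directly: the diagonals of (3) and (4) agree, so the counting statistics are identical, but the matrices themselves are not, because (3) carries off-diagonal terms that (4) lacks:

```python
# Eq. (4): the collapse-picture density matrix (uses d, U, Up, e, rho_ABC from above)
rho_coll = np.zeros((d**3, d**3))
for i in range(d):
    for j in range(d):
        for k in range(d):
            p = abs(U[i, j])**2 * abs(Up[j, k])**2 / d  # collapse-picture probability
            rho_coll += p * np.kron(np.kron(np.outer(e[i], e[i]), np.outer(e[j], e[j])),
                                    np.outer(e[k], e[k]))

print(np.allclose(np.diag(rho_ABC), np.diag(rho_coll)))  # True: identical outcome statistics
print(np.allclose(rho_ABC, rho_coll))                    # False: (3) has off-diagonal terms
```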

"So", I can hear you mutter from a distance, "can you make a measurement that supports one or the other of the approaches?  Can experiments tell the difference between the two ways to understand quantum measurement?"

That, Detective,  is the right question. 

How do we tell the difference between two density matrices? Let us focus on qubits here (\(d=2\)). And, just to make things more tangible, let's fix the angles between the consecutive measurements. 

Measurement A is the first measurement, so there is no angle. In fact, A sets the stage and all subsequent measurements will be relative to that. We will take B at 45 degrees to A. This means that B will have a 50/50 chance to record 0 or 1, no matter whether A registered 0 or 1. Note that A also will record 0 or 1 half the time, as it should since the initial state is random and unknown. 

We will take C to measure at an angle of 45 degrees to B also, so that C's entropy will be one bit as well. Thus, each of the three detectors' entropies should be one bit. This will be true, by the way, both in the unitary and in the collapse picture. The relative states between the three detectors are, however, quite different in the two descriptions. Below you can see the quantum Venn diagram for the unitary picture on the left, and the collapse picture on the right.
Quantum Entropy Venn diagram for the joint and relative state of three detectors A, B, and C. Detector B measures Q at an angle θ = π/4 relative to the basis of A, and C measures at θ = π/4 relative to the basis of B (from [2]).  

You can first convince yourself that the entropy of each detector is 1 bit in both pictures. You can further convince yourself that the pairwise entropy diagram between any two detectors (tracing out the third) is the same in both pictures. Ordinarily I would leave it to the reader to check this, but here is the result anyway: the pairwise diagram has the entries (1,0,1), meaning that no two detectors share any entropy. 

We kinda knew that it had to be like that, on account of the \(\pi/4\) angles and all. But yes, the full three-detector diagrams look very different. For example, look at detector B. If I give you A and C, the state of B is perfectly known, since \(S(B|AC)=0\). That's not true in the collapse picture: there, giving you A and C does nothing for B.
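You can check these entropy statements numerically with the density matrices from the sketches above (again, my own toy illustration for the assumed \(\pi/4\) angles, not the paper's code). The helpers below compute von Neumann entropies in bits and partial traces of a three-qubit detector state, and print the single-detector entropy S(A), the shared entropy I(A:B), and the conditional entropy S(B|AC) in each picture:

```python
import numpy as np

def S(rho):
    """von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def ptrace(rho_3q, keep):
    """Partial trace of a 3-qubit density matrix, keeping the listed subsystems (0=A, 1=B, 2=C)."""
    t = rho_3q.reshape([2] * 6)                          # (A,B,C, A',B',C')
    for q in sorted(set(range(3)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + t.ndim // 2)
    n = len(keep)
    return t.reshape(2**n, 2**n)

# rho_ABC (unitary, Eq. 3) and rho_coll (collapse, Eq. 4) from the sketches above
for name, r in [("unitary ", rho_ABC), ("collapse", rho_coll)]:
    SA, SB = S(ptrace(r, [0])), S(ptrace(r, [1]))
    SAB, SAC = S(ptrace(r, [0, 1])), S(ptrace(r, [0, 2]))
    print(name, "S(A)=", round(SA, 3), " I(A:B)=", round(SA + SB - SAB, 3),
          " S(B|AC)=", round(S(r) - SAC, 3))
# unitary : S(A)= 1.0  I(A:B)= 0.0  S(B|AC)= 0.0   (A and C together pin down B)
# collapse: S(A)= 1.0  I(A:B)= 0.0  S(B|AC)= 1.0   (A and C tell you nothing about B)
```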

That in itself looks like a death knell for the unitary picture: how could it be that a past and a future experiment fully determine the quantum state in the present? It turns out that such questions have been asked before! Aharonov, Bergmann, and Lebowitz (ABL) showed in 1964 that it is possible to set up a measurement so that knowing the results from A and C will allow you to predict with certainty what B would have recorded [1]. As you can tell from the title of their paper, ABL were concerned with the apparent asymmetry of time in quantum measurement.

Of course there is an asymmetry! A measurement can tell you about the past, but it cannot tell you about the future! What an asymmetry! 

Slow down, there. That's not a fair comparison. Causality is, after all, ruling over us all: what hasn't happened is different from that which has happened. The real question is whether, after all things are said and done, there is an asymmetry between what was and what could have been. In the language of quantum measurement, we should instead ask: if past measurements influence what I can record in the future, do future measurements constrain what once was, in an equal manner? Or, put another way: can a measurement today tell me as much about the state on which it was performed as knowing the state today tells me about future measurements?

To some extent, ABL answered this question in the affirmative. For a fairly contrived measurement scenario, they showed that if you give me the measurement record of the past, as well as what was measured in the future, I can tell you what it is you must have measured in the present. In other words, they said that the past and future, taken together, will predict the present perfectly.

I don't think everybody who read that paper in 1964 was aware of the ramifications of this discovery. I don't think people are now. What we show in our paper is that what ABL demonstrated for a fairly contrived situation in fact holds true universally, all the time.

"Which paper?", you ask. "Come clean already!"

Can't you wait just a little longer? I promise it will be at the end of the blog. You can scroll ahead if you must. 

In fact, we show that the ABL result is just a special case of a statement that holds quite generally. For any sequence of measurements of the same quantum system, Jennifer Glick and I prove that only the very first and the very last measurements are uncertain. All the measurements in between are perfectly predictable. (This holds for the case of measuring unprepared quantum states only.) This makes sense from the point of view I just advocated: you cannot fully know the last measurement because the future has not yet happened, and you cannot know the first measurement because there is nothing in its past. Everything else is perfectly knowable.

Now, "knowable" does not mean "known", because in general you cannot use the results of the individual measurements to make the predictions about the intermediate detectors: you need some of the off-diagonal terms of the density matrix, which means that you have to perform more complex, joint measurements. But you only need the measurement devices, nothing else. 

We show a number of other fairly surprising things about sequences of quantum measurements in the paper entitled "Markovian and non-Markovian quantum measurements", which you can read here (an earlier version, called "Quantum mechanics of consecutive measurements", is still on arXiv here). For example, we show that the sequence of measurements does not form a Markov chain, contrary to what you would expect in a collapse picture. We also show that the density matrix of any pair of detectors in that sequential chain is "classical", which we here identify with "diagonal in the detector product basis". There are several more general results in there: it turned into a fairly long paper.


"So your math says that wavefunctions don't collapse. Can you prove it experimentally?"

That too is an excellent question. Math, after all, is just a surrogate that helps us understand the laws of nature. What we are saying is that the laws of nature are not as you thought they were. And if you make a statement like that, then it should be falsifiable. If your theory truly goes beyond the accepted canon, then there must be an experiment that will support the new theory (it cannot prove it, mind you) by sending the old theory to where.... old theories go to die. 

What is that experiment? It turns out it is not an easy one. Or, at least, for this particular scenario (three consecutive measurements of the same quantum system) the experiment is not easy. The statistics of counts of the three measurement devices are predicted by the diagonal of the joint density matrix \(\rho_{ABC}\), and this diagonal is the same in the unitary relative-state picture and the collapse picture. The difference is in the off-diagonal elements of the density matrix. Now, there are methods that allow you to measure off-diagonal elements of a quantum state, using so-called "quantum state tomography". Because the density matrix in question is large (an 8x8 matrix for qubit measurements), this is a very involved measurement.

Fortunately, there are shortcuts. It turns out that for the case at hand, every single moment of the density matrix is different. The nth moment of a density matrix is defined as \({\rm Tr}\,\rho^n\), and it turns out that already the second moment, that is \({\rm Tr}\,\rho^2\), differs between the two pictures. Measuring the second moment of the density matrix is far simpler than measuring the entire matrix via quantum state tomography, but given that this is a three-qubit system, it is still not a simple endeavor. But it is one that I hope someone can be convinced is worth undertaking (but see the post scriptum at the end of this blog). Because it will be the experiment that sends the Copenhagen interpretation packing, for all time.
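For what it's worth, the second moments are easy to compute from the density matrices in the sketches above (my toy \(\pi/4\)-angle example, not numbers taken from the paper):

```python
# second moments Tr(rho^2) of the two three-detector density matrices from the sketches above
print(np.trace(rho_ABC @ rho_ABC).real)    # 0.25  (unitary picture, pi/4 angles)
print(np.trace(rho_coll @ rho_coll).real)  # 0.125 (collapse picture, pi/4 angles)
```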

So I asked myself, "How do I close such a long series about quantum measurement, and this interminable last post?" I hope to have brought quantum measurement a little bit out of the obscure corner where it is sometimes relegated to. Much about quantum measurement can be readily understood, and what mysteries there still are can, I am confident, be resolved as well. Collapse never made any physical sense to begin with, but neither did a branching of the universe. We know that quantum mechanics is unitary, and we now know that the chain of measurements is too. What remains to be solved, really, is just to figure out where the randomness that we experience in the last measurement comes from, when the future is still uncertain.

Where does this randomness come from? What do these probabilities mean?  I have some ideas about that, but this will have to wait for another blog post. Or series.

Part 8 of this never-ending series is here. You might also be interested in a post that is outside of the series, but could very well be a part of it. I describe the mysterious delayed-choice quantum eraser that sheds light on the "are quanta particles or waves?" issue, here.


[1] Y. Aharonov, P. G. Bergmann, and J. L. Lebowitz, "Time symmetry in the quantum process of measurement," Phys. Rev. 134, B1410–B1416 (1964).

[2] J. R. Glick and C. Adami, "Markovian and Non-Markovian Quantum Measurements", Found. Phys. 50 (2020) 1008-1055.


P.S. (2021). A direct measurement of the three-measurement density matrix shown in the Venn diagram in the figure above has been carried out using quantum optics at the University of Ottawa, ruling out the collapse picture by a wide margin. These results are being written up. Stay tuned.