
Tuesday, July 14, 2015

On quantum measurement (Part 6: The quantum eraser)

Here's to you, quantum measurement aficionado, who has found their way to the sixth installment, breathless (I hope), to learn of the fate of the famous cat, eponymous with one of the great ones of quantum mechanics. Does she live or die? Can she be both dead and alive? What did this kitten ever do to deserve such an ambiguous fate? And, how many times can you write a blog post teasing cat-revelations and still not talk about it after all? And talk about erasers instead? How many?

In my defense, I tried to talk about the cat in this post. I really did. But the quantum eraser, and in particular the double-slit experiment that I have to describe first, took so much room that it was just not in the cards. But rejoice then, this series will be even longer!

<Silence>

OK then. I think we are beyond the obligatory summary of previous posts now. This can get tedious very fast. ("In Part 27, we learned that...").  Instead, I refer those who stumbled on this to peruse the first post here, which should be enough to get you on the road.

So where were we? In the last post, I ended by teasing you with the quantum eraser. This post will be all about the quantum description of these seemingly paradoxical situations: the two-slit experiment, and the quantum eraser. Part 7 will thus be the one concerned with Felis Schrödingerii, and it is Part 8 that will deal with even more obscure paradoxes such as the Zeno and Anti-Zeno effects. But that post will use these paradoxes as a ploy to investigate something altogether more important: namely, the legacy of Hans Bethe's insight from the very first post of the series.

Foreshadowing aside, let's take a look at the infamous double-slit experiment (also known as "Young's experiment"). Richard Feynman (in his lectures) famously remarked that the experiment is "a phenomenon which is impossible […] to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery."

Really, the only mystery? 

Well, once I get through all this (meaning this and the following posts), you might agree that perhaps this is not so far off the mark. If I can get you to slowly and knowingly nod about this statement (while thinking that there is also much more to QM than that), then I have done my job.

As is my wont, I will be borrowing images from Wikimedia to illustrate stuff. (Because they explicitly encourage that sort of thing). Here, for example, is what they would show about the double-slit experiment:
Fig. 1: The double-slit experiment (source: Wikimedia)

Here's what happens in this experiment. You shoot electrons at a screen that has, yes you guessed it, two slits. It is important that these are electrons, not photons. (Actually it is not but you would not understand why I say this just yet. So let's forget I even uttered this). You shoot particles at a screen. Particles! (Imagine this in the voice of Seinfeld's Seinfeld). There is a screen behind the double slit. It records where the electrons that got through the slits land. What would you expect to find there?

Maybe something like this?
Fig. 2: The pattern expected at the screen if classical particles are sent in from the left.


If the world were classical, yes, this is what you would see. But the world is not classical, and this is not what we observe. What you get (and what you are looking at below is data from an actual such experiment) is this:
Fig. 3: The intensity pattern on the screen if quantum particles are sent in. (Source: Wikimedia)
Now, don't focus on the fact that you have a big blob in the middle, and two smaller blobs left and right. I don't have time (and neither do you) to explain those. Focus on the blob in the middle instead. You thought you were going to see two intense blobs of light, one behind each slit. But you don't. You think something is wrong with your apparatus. But try as you might, you get this pattern every time.

What you should focus on are the dark lines that interrupt the blob. Where on earth do they come from? They look suspiciously like interference patterns, as if you had shone some light on the double slit, like so:
Fig. 3: Interference patterns from light shining on a double slit.
But you did not shine light on these slits. You sent particles. Particles! A particle cannot interfere with itself! Can it?

Here is where we pause. You certainly have to hand it to Feynman. It seems these electrons are not your ordinary classical particles.

The pattern on the screen does not tell you that an electron "went through one or the other slit". The electron's wavefunction interacts with the slits, and then the entangled wavefunction interacts with the screen, and then you look at the screen (not the quantum state, as you remember). Let's find out how that works, using the formalism that we just learned. (Of course you saw this coming, didn't you?)

Say you have your electron, poised to be sent onto the double slit. It is described by a wave function $\Psi(x)$. All your standard quantum stuff applies to this wavefunction by the way, such as an uncertainty relation for position and momentum.  I could have written down the wavefunction of a wave packet traveling with momentum $p$ just as well. None of these complications matter for what I'm doing here. 

When this wavefunction crosses the double slit, it becomes a superposition of having gone through the slit $L$ and through the slit $R$, and I can write it like this (see Figure below).
                             $|\Psi_1\rangle=\frac1{\sqrt 2}\left(|\Psi_L\rangle+|\Psi_R\rangle\right) \ \ \ \  (1)$ 
I have written $|\Psi_1\rangle$ in terms of the "bra-ket" notation to remind you that the wavefunction is a vector, and I have identified only the position (because this is what we will measure later).
Fig. 4: Sketch of the double-slit experiment in terms of wavefunctions
Of course, you are free to roll your eyes and say "How could the electron possibly be in a superposition of having gone through the left slit and the right slit at the same time?" But that is precisely what you have to assume in order to get the mathematics to agree with the experimental result, which is the intensity pattern on the right in the figure above. 

What happens when we re-unite the two branches of the wavefunction at the spot where I write $\Psi_2$ in the figure above and then measure the space-component of the wavefunction? We learned in Parts 1-5 how to make that measurement, so let's get cracking. 

I spare you the part where I introduce the measurement device, entangle it with the quantum system, and then trace over the quantum system, because we already know what this gives rise to: Born's rule, which says that the likelihood to get a result at $x$ is equal to the absolute square of the wavefunction

                                                  $P(x)=| \langle x|\Psi_2\rangle|^2=|\Psi_2(x)|^2  \ \ \ \  \ \   (2)$

There, that's how simple it is. I remind you here that you can use the square of the wavefunction instead of the square of the measurement device's likelihood because they are one and the same. The quantum entanglement tells you so. The Venn diagram tells you so. If this is not immediately obvious to you, I suggest a revisiting of Parts 4 & 5.

Let's calculate it then. Plugging (1) into (2) we get

            $P(x)=\frac12\left(|\Psi_L(x)|^2+|\Psi_R(x)|^2+ 2 {\rm Re}[\Psi_L^\star(x)\Psi_R(x)]\right) \ \ \ \ (3)$
The two first terms in (3) do what you are used to in classical physics: they make a smudge on the screen at the locations of the two slits, $x=L$ and $x=R$. The interesting part is the third term, which is given by the real part of the product of the complex conjugate of $\Psi_L(x)$ with $\Psi_R(x)$. That's what the math says. And if you write down what these things are, you will find that these are the parts that create the "fringes", namely the interference pattern between $\Psi_L$ and $\Psi_R$. That's because that cross term can become negative, while the first two terms must be positive. If you did not have a wavefunction split just like I showed in (1), then you would not get that cross term, and you would not be able to understand the experiment. And hence you would not understand how the world works. 
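If you want to see the cross term doing its work, here is a minimal numerical sketch (in Python). It models the two branch wavefunctions as Gaussian envelopes behind each slit with opposite phase gradients; all parameters (slit separation, envelope width, phase gradient) are invented purely for illustration and are not taken from any real experiment.

```python
import numpy as np

# Toy model of the two branches at the screen (all numbers invented for illustration)
x = np.linspace(-15, 15, 2000)          # position on the screen, arbitrary units
a, w, q = 3.0, 4.0, 3.0                 # slit separation, envelope width, phase gradient
psi_L = np.exp(-(x + a)**2 / (2 * w**2)) * np.exp(+1j * q * x)
psi_R = np.exp(-(x - a)**2 / (2 * w**2)) * np.exp(-1j * q * x)

# Eq. (3): the last (cross) term is the one responsible for the fringes
P = 0.5 * (np.abs(psi_L)**2 + np.abs(psi_R)**2 + 2 * np.real(np.conj(psi_L) * psi_R))

# Drop the cross term and you get the smooth, classical-looking pattern
P_no_cross = 0.5 * (np.abs(psi_L)**2 + np.abs(psi_R)**2)

center = np.abs(x) < 2.0                # look only at the central part of the screen
contrast = lambda p: (p[center].max() - p[center].min()) / (p[center].max() + p[center].min())
print("fringe contrast with the cross term   :", round(contrast(P), 3))           # close to 1
print("fringe contrast without the cross term:", round(contrast(P_no_cross), 3))  # close to 0
```

Plot $P$ against $x$ and you get fringes under a broad envelope; plot the version without the cross term and the fringes are gone.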

"But, but.." I can almost here some of you object, "surely I can find out through which of the slits the electron actually went, no?"

Indeed you could try. Let's see what happens if you do. One way to do this is to put a device into the path of the electron that flips its spin (its helicity) in one of the branches (say the left, $L$), but not in the right. Then all I have to do is measure the helicity to know through which slit the electron went, right? But how would you explain, then, the interference pattern?

Well, let's try this (and ignore for a moment that this experiment is not at all easy to do with electrons, though it is very easy to do with photons, using polarization instead of helicity). So now the wavefunction has another tag, which I will call $u$ (for "up", so I don't have to type $\uparrow$ all the time) and $d$.

After the slits, the wavefunction becomes
                     $|\Psi_1\rangle=\frac1{\sqrt2}(|\Psi_L\rangle|u\rangle+|\Psi_R\rangle|u\rangle) \ \ \ \ (4)$
The new identifier ("quantum number") is in a product state with respect to all the other quantum numbers. And if nothing else happened, these quantum numbers would not affect the interference pattern. (This is why I was able to ignore the momentum variable above, for example: it is in a product state). But now let's put the helicity flipper into the left pathway, like in the figure below:
Fig. 6: Double-slit interference experiments with location variable tagged by helicity
We then get the entangled state:

         $|\Psi\rangle=\frac1{\sqrt2}\left(|\Psi_L\rangle|d\rangle+|\Psi_R\rangle |u\rangle\right)\ \ \ (5)$

All right, you managed to tag the location variable with helicity. Now we could measure the helicity to find out which slit the electron went through, right? But before we do that, let's take a look at the interference pattern, by calculating \(P(x)\) as before. Rather than (3), we now get
         $P(x)=\frac12\left(|\Psi_L(x)|^2\langle d|d\rangle +|\Psi_R(x)|^2\langle u|u\rangle + \Psi_L^\star(x)\Psi_R(x)\langle d|u\rangle  + \Psi_R^\star(x)\Psi_L(x)\langle u|d\rangle\right)$

Now, the helicity states $|u\rangle$ and $|d\rangle$ are orthogonal, of course. What that means is that $\langle d|d\rangle=\langle u|u \rangle=1$ while $\langle d|u\rangle=\langle u|d\rangle=0$, and $P(x)$ simply becomes
            $P(x)=\frac12\left(|\Psi_L(x)|^2+|\Psi_R(x)|^2\right) \ \ \ \ (6)$
and the interference term is gone. What you get is the pattern that you see on the right hand side of the figure above: no fringes, just two peaks (one from the left slit, one from the right).

You note of course that we didn't even need to measure the helicity in order to destroy the pattern. Just making it possible to do so was sufficient.

But guess what. We can undo this attempt at a measurement. Yes, we can "unlook" the location of the electron, and get the interference pattern back. That's what the quantum eraser does, and you'd be surprised how easy it is.

The trick here is to insert a filter just before measuring the re-united wavefunction. This filter measures the wavefunction not in the orthogonal basis $(u,d)$ but rather in a basis that is rotated by 45$^\circ$ with respect to the $(u,d)$ system. The rotated basis states are 
 $|U\rangle=\frac1{\sqrt2}(|u\rangle+|d\rangle)$ and $|D\rangle=\frac1{\sqrt2}(|u\rangle-|d\rangle)$. Detectors that measure in such a rotated (or "diagonal") basis are easy to make for photon polarizations, and quite a bit harder for spins, but let that be an experimental quibble.

If we put such a detector just before the screen (as in Fig. 7 below), the wavefunction becomes
$\frac1{\sqrt2}\left(|\Psi_L\rangle|d\rangle+|\Psi_R\rangle |u\rangle\right)\rightarrow \frac1{2}\left(|\Psi_L\rangle+|\Psi_R\rangle\right)\frac1{\sqrt2}(|u\rangle+|d\rangle)\ \ \ (7)$,
that is, the space wavefunction and the spin wavefunction are disentangled; they are back to being a product. You lose half your initial amplitude, yes. (See the factor 1/2 in (7)?)

How do you show that this is right? It's a simple application of the quantum measurement rules I showed you in Parts 1-5. To measure in the "diagonal" basis, you start with a measurement ancilla in one of the basis states (say $|D\rangle$), then write the $L$ and $R$ wavefunction in that basis, and then apply the measurement operators, which as you remember is the "attempted cloning" operator. Then finally you measure $D$ (that is, you calculate $\langle D|\Psi\rangle$). I could show these three lines, or I could let you try them yourself. 

I think I'll let you try it yourself, because you should really see how the disentanglement happens. Also, I'm really tired right now :-)
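If you do want to check your three lines against something, here is a throwaway numerical sketch that uses the same toy branch wavefunctions as in the sketch above (again, every parameter is made up). It verifies both eq. (6) (tagged, no fringes) and the effect of the eq. (7) filter (fringes restored):

```python
import numpy as np

# Same toy branch wavefunctions as before (parameters invented for illustration)
x = np.linspace(-15, 15, 2000)
a, w, q = 3.0, 4.0, 3.0
psi_L = np.exp(-(x + a)**2 / (2 * w**2)) * np.exp(+1j * q * x)
psi_R = np.exp(-(x - a)**2 / (2 * w**2)) * np.exp(-1j * q * x)

# Eq. (5): the |u> spin component carries psi_R, the |d> component carries psi_L
amp_u = psi_R / np.sqrt(2)
amp_d = psi_L / np.sqrt(2)

# No filter: sum over the two orthogonal spin outcomes -> eq. (6), no fringes
P_tagged = np.abs(amp_u)**2 + np.abs(amp_d)**2

# Eraser: keep only the spin component along |U> = (|u>+|d>)/sqrt2 (cf. eq. (7)),
# or along the complementary state |D> = (|u>-|d>)/sqrt2
P_U = np.abs((amp_u + amp_d) / np.sqrt(2))**2     # fringes are back
P_D = np.abs((amp_u - amp_d) / np.sqrt(2))**2     # the complementary "anti-fringes"

center = np.abs(x) < 2.0
contrast = lambda p: (p[center].max() - p[center].min()) / (p[center].max() + p[center].min())
print("fringe contrast, tagged (no filter) :", round(contrast(P_tagged), 3))   # ~0
print("fringe contrast, erased (|U> filter):", round(contrast(P_U), 3))        # ~1
print("U and D patterns add up to the fringe-free one:", bool(np.allclose(P_U + P_D, P_tagged)))
```

Note the last line: the fringes and the anti-fringes add up to the smooth, fringe-free pattern, which is why you cannot keep the which-path information and have the fringes at the same time.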

That this really could work was first proposed by Marlan Scully in the now classic paper [1] (see also [2]). The experiment was carried out successfully multiple times, notably in [3]. You might object that they used photons and not electrons there. But the important point is the erasure of the which-path information provided by the "helicity-flipper" (which is a "polarization-rotator" if you're using photons), and that certainly does not depend on whether you are massive or not. Because you know, neither the photon nor the electron are particles. Because there is no such thing as a particle. There is only the illusion of particles generated by measurement devices that go 'click'. But if you have read this far, then you already know not to trust the measurement devices: they lie.

And the best illustration of these lies is perhaps Schrödinger's cat. You will get to play with the equations describing her, I promise. And maybe, just maybe, you will also come to appreciate that quantum reality, and the reality we perceive via our classical measurement devices, are two very different things. 


[1] M.O. Scully and K. Drühl, Quantum eraser – a proposed photon-correlation experiment concerning observation and delayed choice in quantum mechanics, Phys. Rev. A 25, 2208 (1982).

[2] M.O. Scully, B.-G. Englert, H. Walther, Quantum optical tests of complementarity, Nature 351, 111 (1991).

[3]  S. P. Walborn, M. O. Terra Cunha, S. Padua, Double-slit quantum eraser. Phys. Rev. A 65, 033818 (2002).

Saturday, May 23, 2015

What happens to an evaporating black hole?

For years now I have written about the quantum physics of black holes, and each and every time I have pushed a single idea: that if black holes behave as (almost perfect) black bodies, then they should be described by the same laws as black bodies are. And that means that besides the obvious processes of absorption and reflection, there is the quantum process of spontaneous emission (discovered to occur in black holes by Hawking), and this other process, called stimulated emission (neglected by Hawking, but discovered by Einstein). The latter solves the problem of what happens to information that falls into the black hole, because stimulated emission makes sure that a copy of that information is always available outside of the black hole horizon (the stories are a bit different for classical vs. quantum information). These stories are told in a series of posts on this blog:

Oh these rascally black holes (Part I)
Oh these rascally black holes (Part II)
Oh these rascally black holes (Part III)
Black holes and the fate of quantum information
The quantum cloning wars revisited 

I barely ever thought about what happens to a black hole if nothing is falling into it. We all know (I mean, we have been told) that the black hole is evaporating. Slowly, but surely. Thermodynamic calculations can tell you how fast this evaporation process is: the rate of mass loss is inversely proportional to the square of the black hole mass.

But there is no calculation of the entropy (and hence the mass) of the black hole as a function of time!

Actually, I should not have said that. There are plenty of calculations of this sort. There is the CGHS model, the JT model, and several others. But these are models of quantum gravity in which the scalar field of standard curved-space quantum field theory (CSQFT, the theory developed by Hawking and others to understand Hawking radiation) is coupled in one way or another to another field (often the dilaton). You cannot calculate how the black hole loses its mass in standard CSQFT, because that theory is a free field theory! Those quantum fields interact with nothing!

The way you recover the Hawking effect in a free field theory is that you do not consider a mapping of the vacuum from time $t=0$ to a finite time $t$; you map from past infinity to future infinity. So time disappears in CSQFT! Wouldn't it be nice if we had a theory that in some limit just becomes CSQFT, but allows us to explicitly couple the black hole degrees of freedom to the radiation degrees of freedom, so that we could do a time-dependent calculation of the S-matrix?

Well this post serves to announce that we may have found such a theory ("we" is my colleague Kamil Brádler and I). The link to the arXiv article will be below, but before you sneak a peek let me first put you in the right mind to appreciate what we have done.

In general, when you want to understand how a quantum state evolves forward in time, from time $t_1$ to time $t_2$, say, you write
$$|\Psi(t_2)\rangle=U(t_2,t_1)|\Psi(t_1)\rangle\ \ \     (1)$$
where $U$ is the unitary time evolution operator
$$U(t_2,t_1)=Te^{-i\int_{t_1}^{t_2}H(t')dt'}\ \ \      (2)$$
The $H$ is of course the interaction Hamiltonian, which describes the interaction between quantum fields. The $T$ is Dyson's time-ordering operator, and assures that products of operators always appear in the right temporal sequence. But the interaction Hamiltonian $H$ does not exist in free-field CSQFT.

In my previous papers with Brádler and with Ver Steeg, I hinted at something, though. There we write this mapping from past to future infinity in terms of a Hamiltonian (oh, the wrath that this incurred from staunch relativists!) like so:
$$|\Psi_{\rm out}\rangle=e^{-iH}|\Psi_{\rm in}\rangle\ \ \     (3)$$
where $|\Psi_{\rm in}\rangle$ is the quantum state at past infinity, and $|\Psi_{\rm out}\rangle$ is at future infinity. This mapping really connects creation and annihilation operators via a Bogoliubov transformation
$$A_k=e^{-iH}a_ke^{iH}\ \ \ (4)$$
where the $a_k$ are defined on the past null infinity time slice, and the $A_k$ at future null infinity, but writing it as (3) makes it almost look as if $H$ is a Hamiltonian, doesn't it? Except there is no $t$. The same $H$ is in fact used in quantum optics a lot, and describes squeezing. I added to this a term that allows for scattering of radiation modes on the horizon in the 2014 article with Ver Steeg, and that can be seen as a beam splitter in quantum optics. But it is not an interaction operator between black holes and radiation.
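To see this quantum-optics connection in the flesh, here is a small numerical sketch. It is not the formalism of any of the papers: the Fock-space truncation, the squeezing parameter, and my convention for the factor of $i$ are all choices made just for this illustration. It exponentiates the two-mode "squeezing" generator on the vacuum and checks that the reduced state of one mode comes out thermal, which is the Hawking spectrum in this language.

```python
import numpy as np
from scipy.linalg import expm

dim = 30                       # Fock-space truncation (arbitrary)
r = 0.8                        # squeezing parameter (arbitrary)

ann = np.diag(np.sqrt(np.arange(1, dim)), 1)   # truncated annihilation operator
I = np.eye(dim)
b, c = np.kron(ann, I), np.kron(I, ann)        # the two radiation modes

# Anti-Hermitian generator K = r (b^dag c^dag - b c), so U = exp(K) is unitary.
# (Conventions for the factor of i differ; this particular choice is mine.)
K = r * (b.conj().T @ c.conj().T - b @ c)
U = expm(K)

vac = np.zeros(dim * dim); vac[0] = 1.0
psi = U @ vac                                   # two-mode squeezed vacuum

# Reduced state of mode b: trace out mode c
rho = np.outer(psi, psi.conj()).reshape(dim, dim, dim, dim)
rho_b = np.einsum('ikjk->ij', rho)

pn = np.real(np.diag(rho_b))[:6]
thermal = (1 - np.tanh(r)**2) * np.tanh(r)**(2 * np.arange(6))
print("occupation probabilities:", np.round(pn, 4))
print("thermal (Hawking-like)  :", np.round(thermal, 4))
```

The two printed rows agree: tracing out the partner mode of a two-mode squeezed state leaves you with a thermal state, which is the quantum-optics way of saying that Hawking radiation is thermal.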

For the longest time, I didn't know how to make time evolution possible for black holes, because I did not know how to write the interaction. Then I became aware of a paper by Paul Alsing from the Air Force Research Laboratory, who had read my paper on the classical capacity of the quantum black hole channel, repeated all of my calculations (!), and realized that there exists, in quantum optics, an extension to the Hamiltonian that explicitly quantizes the black hole modes! (Paul's background is quantum optics, so he was perfectly positioned to realize this.)

Because you see, the CSQFT that everybody is using since Hawking is really a semi-classical approximation to quantum gravity, where the black hole "field" is static. It is not quantized, and it does not change. It is a background field. That's why the black hole mass and entropy change cannot be calculated. There is no back-reaction from the Hawking radiation (or the stimulated radiation for that matter), on the black hole. In the parlance of quantum optics, this approximation is called the "undepletable pump"  scenario. What pump, you ask?

In quantum optics, "pumps" are used to create excited states of atoms. You can't have lasers, for example, without a pump that creates and re-creates the inversion necessary for lasing. The squeezing operation that I talked about above is, in quantum optics, performed via parametric downconversion, where a nonlinear crystal is used to split photons into pairs like so:
Fig. 1: Spontaneous downconversion of a pump beam into a "signal" and an "idler" beam. Source: Wikimedia
Splitting photons? How is that possible? Well it is possible because of stimulated emission! Basically, you are seeing the quantum copy machine at work here, and this quantum copy machine is "as good as it gets" (not perfect, in other words, because you remember of course that perfect quantum copying is impossible). So now you see why there is such a tantalizing equivalence between black holes and quantum optics: the mathematics describing spontaneous downconversion and black hole physics is the same: eqs (3) and (4). 

But these equations do not quantize the pump, it is "undepleted" and remains so. This means that in this description, the pump beam is maintained at the same intensity. But quantum opticians have learned how to quantize the pump mode as well! This is done using the so-called "tri-linear Hamiltonian": it has quantum fields not just for the signal and idler modes (think of these as the radiation behind and in front of the horizon), but for the pump mode as well. Basically, you start out with the pump in a mode with lots of photons in it, and as they get down-converted the pump slowly depletes, until nothing is left. This will be the model of black hole evaporation, and this is precisely the approach that Alsing took, in a paper that appeared in the journal "Classical and Quantum Gravity" last year.

"So Alsing solved it all", you are thinking, "why then this blog post?" 

Not so fast. Alsing brought us on the right track, to be sure, but his calculation of the quantum black hole entropy as a function of time displayed some weird features. The entropy appeared to oscillate rather than slowly decrease. What was going on here?

For you to appreciate what comes now, I need to write down the trilinear Hamiltonian:
$$H_{\rm tri}=r(ab^\dagger c^\dagger-a^\dagger bc)\ \ \ (5) $$.
Here, the modes $b$ and $c$ are associated with radiation degrees of freedom in front of and behind the horizon, whereas $a$ is the annihilation operator for the black hole modes (the "pump" modes). Here's a pic so that you can keep track of these.
Fig. 2: Black hole and radiation modes $b$ and $c$.
In the semi-classical approximation, the $a$ modes are replaced with their background-field expectation value, which morphs $H_{\rm tri}$ into $H$ in eqs. (3) and (4), so that's wonderful: the trilinear Hamiltonian turns into the Hermitian operator implementing Hawking's Bogoliubov transformation in the semi-classical limit. 

But how do you use $H_{\rm tri}$ to calculate the S-matrix I wrote down long ago, at the very beginning of this blog post? One thing you could do is to simply say,
$$U_{\rm tri}=e^{-iH_{\rm tri}t}\ ,$$
and then the role of time is akin to a linearly increasing coefficient $r$ in eq. (5). That's essentially what Alsing did (and Nation and Blencowe before him, see also Paul Nation's blog post about it) but that, it turns out, is only a rough approximation of the true dynamics, and does not give you the correct result, as we will see. 

Suppose you calculate $|\Psi_{\rm out}\rangle=e^{-iH_{\rm tri}t}|\Psi_{\rm in}\rangle$, and using the density matrix $\rho_{\rm out}=|\Psi_{\rm out}\rangle \langle \Psi_{\rm out}|$ you calculate the von Neumann entropy of the black hole modes as
$$S_{\rm bh}=-{\rm Tr} \rho_{\rm out}\log \rho_{\rm out}\ \ \ (6)$$
Note that this entropy is exactly equal to the entropy of the radiation modes $b$ together with $c$, as the initial black hole is in a pure state with zero entropy. 

How can a black hole that starts with zero entropy lose entropy, you ask? 

That's a good question. We begin at $t=0$ with a black hole in a definite state with $n$ quanta in the black hole mode (the state $|\Psi_{\rm in}\rangle=|n\rangle$), for convenience of calculation. We could instead start in a mixed state, but the results would not be qualitatively different after the black hole has evolved for some time, yet the calculation would be much harder. Indeed, after interacting with the radiation the black hole modes become mixed anyway, and so you should expect the entropy to start rising from zero quickly at first, and only after it approaches its maximum value would it decay. That is a behavior that black hole folks are fairly used to, as a calculation performed by Don Page in 1993 shows essentially (but not exactly) this behavior.

Page constructed an entirely abstract quantum information-theoretic scenario: suppose you have a pure bi-partite state (like we start out with here, where the black hole is one half of the bi-partite state and the radiation field $bc$ is the other), and let the two systems interact via random unitaries. Basically he asked: "What is the entropy of a subsystem if the joint system was in a random state?" The answer, as a function of the (log of the) size of the dimension of the radiation subsystem is shown here:
Fig. 3: Page curve (from [1]) showing first the increase in entanglement entropy of the black hole, and then a decrease back to zero. 
People usually assume that the dimension of the radiation subsystem (dubbed by Page the "thermodynamic entropy", as opposed to the entanglement entropy) is just a proxy for time, so that what you see in this "Page curve" is how at first the entropy of the black hole increases with time, then turns around at the "Page time", until it vanishes.

This calculation (which has zero black hole physics in it) turned out to be extremely useful, as it showed that the amount of information from the black hole (defined as the maximum entropy minus the entanglement entropy) may take a long time to come out (namely at least the Page time), and it would be essentially impossible to determine from the radiation field that the joint field is actually in a pure state. But as I said, there is no black hole physics in it, as the random unitaries used in that calculation were, well, random.  

What if you use $U_{\rm tri}$ for the interaction instead? This is essentially the calculation that Alsing did, and it turns out to be fairly laborious, because as opposed to the bi-linear Hamiltonian, which can be solved analytically, you can't do that with $U_{\rm tri}$. Instead, you have to either expand $H_{\rm tri}$ in $rt$ (that really only works for very short times) or use other methods. Alsing used an approximate partial differential equation approach for the quantum amplitude $c_n(t)=\langle n|e^{-iH_{\rm tri}t}|\Psi_{\rm in}\rangle$. The result shows the increase of the black hole entropy with time as expected, and then indeed a decrease:
Fig. 4: Black hole entropy using (6) for $n$=16 as a function of $t$
Actually, the figure above is not from Alsing (but very similar to his), but rather is one that Kamil Brádler made, but using a very different method. Brádler figured out a method to calculate the action of $U_{\rm tri}$ on a vacuum state using a sophisticated combinatorial approach involving something called a "Dyck path". You can find this work here. It reproduces the short-time result above, but allows him to go much further out in time, as shown here:
Fig. 5: Black hole entropy as in Fig. 4, at longer times. 
The calculations shown here are fairly intensive numerical affairs, as in order to get converging results, up to 500 terms in the Taylor expansion have to be summed. This result suggests that the black hole entropy is not monotonically decreasing, but rather is oscillating, as if the black hole was absorbing modes from the surrounding radiation, then losing them again. However, this is extremely unlikely physically, as the above calculation is performed in the limit of perfectly reflecting black holes. But as we will see shortly, this calculation does not capture the correct physics to begin with. 
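To get a feeling for where curves like those in Figs. 4 and 5 come from, here is a toy version of this single-exponential calculation at a very small scale (the published calculations use far more sophisticated methods; the coupling, the initial occupation $n$, and the convention for the factor of $i$ below are choices I made just for this sketch):

```python
import numpy as np
from scipy.linalg import expm

n = 5                          # initial number of black hole ("pump") quanta, toy value
dim = n + 1                    # starting from |n,0,0>, no mode ever holds more than n quanta
r = 1.0                        # coupling strength, arbitrary units

ann = np.diag(np.sqrt(np.arange(1, dim)), 1)
I = np.eye(dim)

def embed(op, slot):           # put a single-mode operator into slot 0 (a), 1 (b) or 2 (c)
    ops = [I, I, I]
    ops[slot] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

a, b, c = embed(ann, 0), embed(ann, 1), embed(ann, 2)
K = r * (a @ b.conj().T @ c.conj().T - a.conj().T @ b @ c)   # anti-Hermitian version of eq. (5)

psi0 = np.zeros(dim**3); psi0[n * dim * dim] = 1.0           # the state |n,0,0>

def bh_entropy(t):
    psi = expm(t * K) @ psi0
    rho = np.outer(psi, psi.conj()).reshape(dim, dim**2, dim, dim**2)
    rho_a = np.einsum('ikjk->ij', rho)                       # trace out both radiation modes
    w = np.linalg.eigvalsh(rho_a); w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())                    # eq. (6), in bits

for t in np.linspace(0.0, 6.0, 13):
    print(f"t = {t:4.1f}    S_bh = {bh_entropy(t):6.3f} bits")
```

Compare the numbers that come out with the shape of Fig. 4 and Fig. 5 above; this single-exponential evolution is exactly what the Static Path Approximation discussed below amounts to.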

What is wrong with this calculation? Let us go back to the beginning of this post, the time evolution of the quantum state in eqs. (1,2). The evolution operator $U(t_2,t_1)=Te^{-i\int_{t_1}^{t_2}H(t')dt'}$, applied to the initial state, gives rise to an integral over the state space: a path integral. How did that get replaced by just $e^{-iHt}$?

We can start by discretizing the integral into a sum, so that $\int_0^t H(t')dt'\approx\sum_{i=0}^NH(t_i)\Delta t$, where $\Delta t$ is small, and $N\Delta t=t$. And because that sum is in the exponent, $U$ actually turns into a product:
$$U(0,t)\approx \Pi_{i=0}^N e^{-i\Delta t H(t_i)}\ \ \ (7)$$
Because of the discretization, each Hamiltonian $H(t_i)$ acts on a different Hilbert space, and the ground state that $U$ acts on now takes the form of a product state of time slices
$$|0\rangle_{bc}=|0\rangle_1|0\rangle_2\times...\times |0\rangle_N$$
And because of the time-ordering operator, we are sure that the different terms of $U(0,t)$ are applied in the right temporal order. If all this seems strange and foreign to you, let me assure you that this is a completely standard approximation of the path integral in quantum many-body physics. In my days as a nuclear theorist, that was how we calculated expectation values in the shell model describing heavy nuclei. I even blogged about this approach (the Monte Carlo Path Integral approach) in the post about nifty papers that nobody is reading. (Incidentally, nobody is reading those posts either.)

And now you can see why Alsing's calculation (and Brádler's initial recalculation of the same quantity with very different methods, confirming Alsing's result) was wrong: it represents an approximation of (7) using a single time slice only ($N$=1). This approximation has a name in quantum many-body physics: it is called the "Static Path Approximation" (SPA). The SPA can be accurate in some cases, but it is generally only expected to be good at small times. At larger times, it ignores the self-consistent temporal fluctuations that the full path integral describes.
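Here is a generic toy illustration of the difference between a single time slice and the full time-ordered product (7), for an ordinary time-dependent Hamiltonian acting on a single spin. (It does not attempt the black hole calculation itself, where each slice also carries fresh radiation modes; the Hamiltonian below is completely made up.)

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):                                   # an arbitrary time-dependent Hamiltonian
    return np.cos(2 * t) * sx + np.sin(2 * t) * sz

def U_sliced(t_final, N):                   # eq. (7): ordered product of N short slices
    dt = t_final / N
    U = np.eye(2, dtype=complex)
    for i in range(N):
        U = expm(-1j * dt * H((i + 0.5) * dt)) @ U   # later slices multiply from the left
    return U

psi0 = np.array([1.0, 0.0], dtype=complex)
t_final = 4.0
reference = U_sliced(t_final, 4000) @ psi0  # very many slices: the converged answer

for N in (1, 4, 16, 64, 256):
    approx = U_sliced(t_final, N) @ psi0
    print(f"N = {N:4d}   overlap with converged result = {abs(np.vdot(reference, approx)):.4f}")
```

With a single slice you generally end up in the wrong state; as the number of slices grows, the overlap with the converged result approaches 1. The black hole calculation below does the same thing, only with thousands of slices and a much bigger Hilbert space.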

So now you know what we did, of course: we calculated the path integral of the S-matrix of the black hole interacting with the radiation field using many many time slices. Kamil was able to do several thousand time slices, just to make sure that the integral converges. And the result looks very different from the SPA. Take a look at the figure below, where we calculated the black hole entropy as a function of the number of time slices (which is our discretized time)
Fig. 6: Black hole entropy as a function of time, for three different initial number of modes. Orange: $n$=5, Red: $n$=20, Pink: $n$=50. Note that the logarithm is taken to the base $n+1$, to fit all three curves on the same plot. Of course the $n=50$ entropy is much larger than the $n=5$ entropy. $\Delta t=1/15$. 
This plot shows that the entropy quickly increases as the pure state decoheres, and then starts to drop because of evaporation. Obviously, if we were to start with a mixed state rather than a pure state, the entropy would just drop. The rapid increase at early times is just a reflection of our short-cut to start with a pure state. It doesn't look exactly like Page's curves, but we cannot expect that as our $x$-axis is indeed time, while Page's was thermodynamic entropy (which is expected to be linear in time). Note that Kamil repeated the calculation using an even smaller $\Delta t=1/25$, and the results do not change.

I want to throw out some caution here. The tri-linear Hamiltonian is not derived from first principles (that is, from a quantum theory of gravity). It is a "guess" at what the interaction term between quantized black hole modes and radiation modes might look like. The guess is good enough that it reproduces standard CSQFT in the semi-classical limit, but it is still a guess. But it is also very satisfying that such a guess allows you to perform a straightforward calculation of black hole entropy as a function of time, showing that the entropy can actually get back out. One of the big paradoxes of black hole physics was always that as the black hole mass shrank, all calculations implied that the entanglement entropy steadily increases and never turns over as in Page's calculation. This was not a tenable situation for a number of physical reasons (and this is such a long post that I will spare you these). We have now provided a way in which this can happen.

So now you have seen with your own eyes what may happen to a black hole as it evaporates. The entropy can indeed decrease, and within a simple "extended Hawking theory", all of it gets out. This entropy is not information mind you, as there is no information in a black hole unless you throw some in it (see my series "What is Information?" if this is cryptic for you). But Steve Giddings convinced me (on the beach at Vieques no less, see photo below) that solving the information paradox was not enough: you've got to solve the entropy paradox also.

A quantum gravity session at the beach in Vieques, Puerto Rico (January 2014). Steve Giddings is in sunglasses watching me explain stimulated emission in black holes. 
I should also note that there is a lesson in this calculation for the firewall folks (who were quite vocal at the Vieques meeting). Because the entanglement between the black hole and radiation involves three entities rather than two, monogamy of entanglement can never be violated, so this provides yet another argument (I have shown you two others in earlier posts) against those silly firewalls.

The paper describing these results is on arXiv:

K. Brádler and C. Adami: One-shot decoupling and Page curves from a dynamical model for black hole evaporation

[1] Don Page. Average entropy of a subsystem. Phys. Rev. Lett. 71 (1993) 1291.



Friday, December 5, 2014

Life and Information

I wrote a blog post about "information-theoretic considerations concerning life and its origins" for PBS's Blog "The Nature of Reality", but as they own the copyright to that piece, I cannot reproduce it here. You are free to follow the link, though: "Living Bits: Information and the Origin of Life".

I'm not complaining: the contract I signed clearly spelled out ownership. I can ask for permission to reproduce it, and I may. 

The piece is based in part on an article that is currently in review. You can find the arxiv version here, and some other bloggers' comments here and here.

Creationists also had something to say about that article, but I won't link any of it here. After all, this is a serious blog. 

Sunday, November 9, 2014

On quantum measurement (Part 5: Quantum Venn diagrams)

Here's what you missed, in case you have stumbled into this series midway. As you are wont to do, of course.

Part 1 had me reminiscing about how I got interested in the quantum measurement problem, even though my Ph.D. was in theoretical nuclear physics, not "foundational" stuff, and introduced the incomparable Hans Bethe, who put my colleague Nicolas Cerf and me on the scent of the problem.

Part 2 provides a little bit of historical background. After all, a bunch of people have thought about quantum measurement, and they are all rightfully famous: Bohr, Einstein, von Neumann. Two of those three are also heroes of mine. Two, not three.

Part 3 starts out with the math of classical measurement, and then goes on to show that quantum mechanics can't do anything like that, because no-cloning. Really: the no-cloning theorem ruins quantum measurement. Read about it if you don't believe me. 

Part 4 goes further. In that part you learn that measuring something in quantum physics means not looking at the quantum system, and that classical measurement devices are, in truth,  really, really large quantum measurement devices, whose measurement basis is statistically orthogonal to the quantum system (on account of them being very high-dimensional). But that you should still respect their quantumness, which Bohr did not.  

Sometimes I wonder how our understanding of quantum physics would be if Bohr had never lived. Well, come to think of it, perhaps I would not be writing this, as Bohr actually gave Gerry Brown his first faculty position at the NORDITA in Copenhagen, in 1960 (yes, before I was even born).  And it was in Gerry's group where I got my Ph.D., which led to everything else. So, if Niels Bohr had never lived, we would all understand quantum mechanics a little better, and this blog would not only never have been written, but also be altogether unnecessary? So, when I wonder about such things, clearly I am wasting everybody's time.

All right, let's get back to the problem at hand. I showed you how Born's rule emerges from not looking. Not looking at the quantum system, that is, which of course you never do because you are so focused on your classical measurement device (that you fervently hope will reveal to you the quantum truth). And said measurement device then proceeds to lie to you by not revealing the quantum truth, because it can't. Let's study this mathematically.  

First, I will change a bit the description of the measurement process from what I showed you in the previous post, where a quantum system (to be measured) was entangled with another measurement device (which intrinsically is also a quantum system). One of the two (system and measurement device) has a special role (namely we are going to look at it, because it looks classical). Rather than describing that measurement device by $10^{23}$ qubits measuring that lonely quantum bit, I'm going to describe the measurement device by two bits. I'm doing that so that I can monitor the consistency of the measurement device: after all, each and every fibre of the measurement device should confidently tell the same story, so all individual bits that make up the measurement device should agree. And if I show you only two of the bits and their correlation, that's because it is simpler than showing all $10^{23}$, even though the calculation including all the others would be exactly the same. 

All right, let's do the measurement with three systems: the quantum system $Q$, and the measurement devices (aka ancillas, see previous posts for explanation of that terminology) $A_1$ and $A_2$. 

Initially then, the quantum system and the ancillae are in the state
$$|Q\rangle|A_1A_2\rangle=|Q\rangle|00\rangle.$$
I'll be working in the "position-momentum" picture of measurement again, that is, the state I want to transfer from $Q$ to $A$ is the position $x$. And I'm going to jump right in and say that $Q$ is in a superposition $x+y$. After measurement, the system $QA_1A_2$ will then be
$$|QA_1A_2\rangle=|x,x,x\rangle+|y,y,y\rangle.$$
Note that I'm dispensing with normalizing the state. Because I'm not a mathematician, is why. I am allowed to be sloppy to get the point across.

This quantum state after measurement is pure, which you know of course means that it is perfectly "known", and has zero entropy:
$$\rho_{QA_1A_2}=|QA_1A_2\rangle\langle QA_1A_2|.$$
Yes, obviously something that is perfectly known has zero uncertainty. And indeed, any density matrix of the form $|.\rangle\langle.|$ has vanishing entropy. If you are still wondering why, wait until you see some mixed ("non-pure") states, and you'll know.

Now, you're no dummy. I know you know what comes next. Yes, we're not looking at the quantum system $Q$. We're looking at *you*, the measurement device! So we have to trace out the quantum system $Q$ to do that. Nobody's looking at that.

Quick note on "tracing out". I remember when I first heard that jargony terminology of "tracing over" (or "out") a system. It is a mathematical operation that reduces the dimension of a matrix by "removing" the degrees of freedom that are "not involved'. In my view, the only way to really "get" what is going on there is to do one of those "tracing outs" yourself. Best example is, perhaps, to take the joint density matrix of an EPR pair, and "trace out" one of the two elements. Once you've done this and seen the result, you'll know in your guts forever what all this means. If this was a class, I'd show you at least two examples. Alas, it's a blog.

So let's trace out the quantum system, which is not involved in this measurement, after all. (See what I did there?)
$$\rho_{A_1A_2}={\rm Tr}_Q(\rho_{QA_1A_2})=|x,x\rangle\langle x,x|+|y,y\rangle\langle y,y|\;.$$
Hey, this is a mixed state! It has *two* of the $|.\rangle\langle.|$ terms. And if I had done the normalization like I'm supposed to, each one would have a "0.5" in front of it. 

Let's make a quick assessment of the entropies involved here. The entropy of the density matrix $\rho_{A_1A_2}$ is positive because it is a mixed state. But the entropy of the joint system was zero! Well, this is possible because someone you know has shown that conditional entropies can be negative:
$$S(QA_1A_2)=S(Q|A_1A_2)+S(A_1A_2)=0.$$
In the last equation, the left hand side has zero entropy because it is a pure state. The entropy of the mixed classical state (second term on right hand side) is positive, implying that the entropy of the quantum system given the measurement device (first term on the right hand side) is negative.
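If you would rather do the tracing out with your computer than by hand, here is a minimal sketch, with $|x\rangle$ and $|y\rangle$ encoded as the two basis states of a qubit and the normalization put back in:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def reduced(rho, keep, dims):
    """Partial trace: keep the subsystems listed in `keep`, trace out the rest."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for sub in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=sub, axis2=sub + rho.ndim // 2)
    d = int(np.prod([dims[k] for k in keep]))
    return rho.reshape(d, d)

# |QA1A2> = (|x,x,x> + |y,y,y>)/sqrt(2), with x -> 0 and y -> 1
psi = np.zeros(8); psi[0b000] = psi[0b111] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
dims = (2, 2, 2)                                   # Q, A1, A2

S_all  = entropy(rho)                              # 0: the joint state is pure
S_A1A2 = entropy(reduced(rho, [1, 2], dims))       # 1 bit: the mixed state above
S_A1   = entropy(reduced(rho, [1], dims))
S_A2   = entropy(reduced(rho, [2], dims))

print("S(QA1A2)  =", round(S_all, 3))
print("S(A1A2)   =", round(S_A1A2, 3))
print("S(Q|A1A2) =", round(S_all - S_A1A2, 3))        # negative, as advertised
print("I(A1:A2)  =", round(S_A1 + S_A2 - S_A1A2, 3))  # the shared entropy of Fig. 1
```

With one qubit's worth of superposition, the entropy unit $S$ of the diagrams is just 1 bit, so you can match these numbers directly to the Venn diagram below.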

What about the measurement device itself? What is the shared entropy between all the "pieces" of the measurement device? Because I gave you only two pieces here, the calculation is much simpler than you might have imagined. I only have to calculate the shared entropy between $A_1$ and $A_2$. But that is trivial given the density matrix $\rho_{A_1A_2}$. Whatever $A_1$ shows, $A_2$ shows also: every single piece of the measurement device agrees with every other piece. Pure bliss and harmony!

Except when you begin to understand that this kumbaya of understanding may have nothing at all to do with the state of the quantum system! They may all sing the same tune, but the melody can be false. Like I said before: measurement devices can lie to you, and I'll now proceed to show you that they must.

The pieces of the measurement device are correlated, all right. A quick look at the entropy Venn diagram will tell you as much:
 Fig. 1: Venn diagram of the entropies in the measurement device made by the pieces $A_1$ and $A_2$.
Here, the entropy $S$ is the logarithm of the number of states that the device can possibly take on. A simple example is a device that can take on only two states, in which case $S=1$ bit. You can also imagine a Venn diagram of a measurement device with more than two pieces. If it is more than five your imagination may become fuzzy. The main thing to remember when dealing with classical measurement devices is that each piece of the device is exactly like any other piece. Once you know the state of one part, you know the state of all other parts. The device is of "one mind", not several. 

But we know, of course, that the pieces by themselves are not really classical, they are quantum. How come they look classical? 

Let's look at the entire system from a quantum information-theoretic point of view, not just the measurement device. The Venn diagram in question, of a quantum system $Q$ measured by a classical system $A$ that has two pieces $A_1$ and $A_2$ is
Fig. 2: Venn diagram of entropies for the full quantum measurement problem, including the quantum system $Q$ and two "pieces" of the measurement device $A$.
Now, that diagram looks a bit baffling, so let's spend some time with it. There are a bunch of minus signs in there for conditional entropies, but they should not be baffling you, because you should be getting used to them by now. Remember, $A$ is measuring $Q$. Let's take a look at what the entropy Venn diagram between $A$ and $Q$ looks like:
Fig. 3: Entropic Venn diagram for quantum system $Q$ and measurement device $A$
That's right, $Q$ and $A$ are perfectly entangled, because that is what the measurement operation does when you deal with quantum systems: it entangles. This diagram can be obtained in a straightforward manner from the joint diagram just above, simply by taking $A$ to be the joint system $A_1A_2$. Then, the conditional entropy of $A$ (given $Q$) is the sum of the three terms $-S$, $S$, and $-S$, the shared entropy is the sum of the three terms $S$, 0, and $S$, and so on. And, if you ignore $Q$ (meaning you don't look at it), then you get back the classically correlated diagram (0,$S$,0) for $A_1$ and $A_2$ you see in Fig. 1.

But how much does the measurement device $A$ know about the quantum system? 

From the entropy diagram above, the shared quantum entropy is $2S$, twice as much as the classical device can have! That doesn't seem to make any sense, and that is because the Venn diagram above has quantum entanglement $2S$, which is not the same thing as classical information. Classical information is that which all the pieces of the measurement device agree upon. So let's find out how much of that shared entropy is actually shared with the quantum system.

That, my friends, is given by the center of the triple Venn diagram above (Fig. 2). And that entry happens to be zero!
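You can check that big fat zero for the state above directly: the center of the triple diagram is the alternating (inclusion-exclusion) combination of all the subsystem entropies. Here is a self-contained sketch:

```python
import numpy as np

def S(rho):                                    # von Neumann entropy in bits
    w = np.linalg.eigvalsh(rho); w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# (|x,x,x> + |y,y,y>)/sqrt(2), indices ordered as Q, A1, A2
psi = np.zeros((2, 2, 2)); psi[0, 0, 0] = psi[1, 1, 1] = 1 / np.sqrt(2)

# reduced density matrices of all single and pairwise subsystems
rQ    = np.einsum('iab,jab->ij', psi, psi)
rA1   = np.einsum('aib,ajb->ij', psi, psi)
rA2   = np.einsum('abi,abj->ij', psi, psi)
rQA1  = np.einsum('ikb,jlb->ikjl', psi, psi).reshape(4, 4)
rQA2  = np.einsum('ibk,jbl->ikjl', psi, psi).reshape(4, 4)
rA1A2 = np.einsum('bik,bjl->ikjl', psi, psi).reshape(4, 4)
rAll  = np.outer(psi.ravel(), psi.ravel())

center = S(rQ) + S(rA1) + S(rA2) - S(rQA1) - S(rQA2) - S(rA1A2) + S(rAll)
print("center of the triple Venn diagram:", round(center, 6))    # 0, as promised
```
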

"Oh, well", you rumble, "that must be due to how you chose to construct this particular measurement!"

Actually, no. That big fat zero is generic, it will always be there in any tri-partite quantum entropy diagram where the joint system (system and measurement device combined) are fully known. It is a LAW.
"What law is that?", you immediately question.

That law is nothing other than the law that prevents quantum cloning! The classical device cannot have any information about the quantum system, because as I said in a previous post, quantum measurement is actually impossible! What I just showed you is the mathematical proof of that statement.
"Hold on, hold on!"
What?
"Measurement reveals nothing about the quantum state? Ever?"

I can see you're alarmed. But rest assured: it depends. If quantum system $Q$ and classical measurement device $A$ (composed of a zillion subsystems $A_1,....A_{\rm zillion}$) are the only things there (meaning they form a pure state together), then yeah, you can't know anything. This law is really a quite reasonable one: sometimes it is called the law of "monogamy of entanglement". 

Otherwise, that is, if a bunch of other systems exist so that, if you trace over them the state and the measurement device are mixed, then you can learn something from your measurement. 

"Back up there for a sec. Monogamy what?"

Monogamy of entanglement? That's just a law that says that if one system is fully entangled with another (meaning their entropy diagram looks like Fig. 3), then they can't share that entanglement in the same intimate manner with a third party. Hence the big fat zero in the center of the Venn diagram in Fig. 2. If you think just a little about it, it will occur to you that if you could clone quantum states, then entanglement would not have to be monogamous at all.

"A second is all it takes?"

I said a little. It could be a second. It could be a tad longer. Stop reading and start thinking. It's not a very hard quiz.

"Oh, right, because if you're entangled with one system, and then if you could make a clone..."

Go on, you're on the right track!

"...then I could take a partner of a fully entangled pair, clone it, and then transform its partner into a new state (because if you clone the state you got to take its partner with it) so that the cloned state is entangled with the new state. Voilà, no more monogamy."

OK you, get back to your seat. You get an A.

So we learn that if a measurement leaves the quantum system in a state as depicted in Fig. 2, then nothing at all can be learned about the quantum state. But this is hardly ever the case. In general, there are many other systems that the quantum system $Q$ is entangled with. And if we do not observe them, then the joint system $QA$ is not pure, meaning that the joint entropy does not vanish. And in that case, the center of the triple quantum Venn diagram does not vanish. And because that center tells you how much information you can obtain about the quantum system in a classical measurement, that means that you can actually learn something about the quantum system, after all.

But keep in mind: if the joint system $QA$ does not have exactly zero entropy, it is just a little bit classical, for reasons I discussed earlier in this series. Indeed, the whole concept of "environment-induced superselection" (or einselection) advocated by the quantum theorist Wojciech Zurek hinges precisely on such additional systems that interact with $QA$ (the "environment"), and that are being ignored ("traced over"). They "classicize" the quantum system, and allow you to extract some information about it. 

I do realize that "classicize" is not a word. Or at least, not a word usually used in this context.  Not until I gave it a new meaning, right?

With this, dear patient reader, I must end this particular post, as I have exceeded the length limit of single blog posts (a limit that I clearly have not bothered with in any of my previous posts). I know you are waiting for Schrödinger's cat. I'll give you that, along with your run-of-the-mill beam splitter, as well as the not-so-run-of-the-mill quantum eraser, in Part 6.

"Quantum eraser? Is that some sort of super hero, or are you telling me that you can reverse a quantum measurement?" 

Oh, the quantum eraser is real. The super hero, however, is Marlan Scully, and you'll learn all about it in Part 6.




Tuesday, October 7, 2014

Nifty papers I wrote that nobody knows about (Part 4: Complex Langevin equation)

This is the last installment of the "Nifty Papers" series. Here are the links to Part 1, Part 2, and Part 3.

For those outside the computational physics community, the following words don't mean anything:

The Sign Problem.

For those others that have encountered the problem, these words elicit terror. They stand for sleepless nights. They spell despair. They make grown men and women weep helplessly. The Sign Problem. 

OK, I get it, you're not one of those. So let me help you out.

In computational physics, one of the main tools people use to calculate complicated quantities is the Monte Carlo method. The method relies on random sampling of distributions in order to obtain accurate estimates of means. In the lab where I was a postdoc from 1992-1995, the Monte Carlo method was used predominantly to calculate the properties of nuclei, using a shell model approach.

I can't get into the specifics of the Monte Carlo method in this post, not least because such an exposition would lose me a good fraction of whatever viewers/readers I have left at this point. Basically, it is a numerical method to calculate integrals (even though it can be used for other things too). It involves sampling the integrand and summing the terms. If the integrand is strongly oscillating (lots of high positives and high negatives), then the integral may be slow to converge. Such integrals appear in particular when calculating expectation values in strongly interacting systems, such as for example big nuclei. And yes, the group I had joined as a postdoc at that point in my career specialized in calculating properties of large nuclei computationally using the nuclear shell model. These folks would battle the sign problem on a daily basis.

And while as a Fairchild Prize Fellow (at the time it was called the "Division Prize Fellowship", because Fairchild did not at that time want their name attached) I could work on anything I wanted (and I did!), I also wanted to do something that would make the life of these folks a little easier. I decided to try to tackle the sign problem. I started work on this problem in the beginning of 1993 (the first page of my notes reproduced below is dated February 9th, 1993, shortly after I arrived).

The last calculation, pages 720-727 of my notes, is dated August 27th, 1999, so I clearly took my good time with this project! Actually, it lay dormant for about four years as I worked on digital life and quantum information theory. But my notes were so detailed that I could pick the project back up in 1999.


The idea to use the complex Langevin equation to calculate "difficult" integrals is not mine, and not new (the literature on this topic goes back to 1985, see the review by Gausterer [1]). I actually had the idea without knowing these papers, but this is neither here nor there. I was the first to apply the method to the many-fermion problem, where I also was able to show that the complex Langevin (CL) averages converge reliably. Indeed, the CL method was, when I began working on it, largely abandoned because people did not trust those averages. But enough of the preliminaries. Let's jump into the mathematics.

Take a look at the following integral:

$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}\cos(\sigma z).$$
This integral looks very much like the Gaussian integral
$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}=1,$$
except for that cosine function. The exact result for the integral with the cosine function is (trust me there, but of course you can work it out yourself if you feel like it) 
$$e^{-(1/2)z^2}.$$
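(You don't have to trust me: a quick numerical check, using scipy's quadrature over a finite interval that comfortably contains the Gaussian, confirms it.)

```python
import numpy as np
from scipy.integrate import quad

def gaussian_cos_integral(z):
    # the Gaussian has died off long before |sigma| = 12, so a finite range is fine
    val, _ = quad(lambda s: np.exp(-0.5 * s**2) * np.cos(s * z) / np.sqrt(2 * np.pi), -12, 12)
    return val

for z in (0.5, 1.0, 2.0, 3.0):
    print(f"z = {z:3.1f}   numerical = {gaussian_cos_integral(z):.6f}   exp(-z^2/2) = {np.exp(-0.5 * z**2):.6f}")
```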
This result might surprise you, as the integrand itself (on account of the cos function) oscillates a lot:
The integrand $\cos(10x)e^{-1/2 x^2}$
The factor $e^{-(1/2) \sigma^2}$ dampens these oscillations, and in the end the result is simple: it is as if the cosine function did nothing but replace $\sigma$ by $z$ in the Gaussian. But a Monte Carlo evaluation of this integral runs into the sign problem when $z$ gets large and the oscillations become more and more violent. The numerical average converges very very slowly, which means that your computer has to run for a very long time to get a good estimate.

Now imagine calculating an expectation value where this problem occurs both in the numerator and the denominator. In that case, we have to deal with small but slowly converging averages both in the numerator and denominator, and the ratio converges even more slowly. For example, imagine calculating the "mean square"
$$\langle \sigma^2\rangle_N=\frac{\int_{-\infty}^\infty d\sigma\,\sigma^2\, e^{-(1/2)\sigma^2}\cos^N(\sigma z)}{\int_{-\infty}^\infty d\sigma\, e^{-(1/2)\sigma^2}\cos^N(\sigma z)}.$$
The denominator of this ratio (for $N=1$) is the integral we looked at above. The numerator just has an extra $\sigma^2$ in it. The $N$ ("particle number") is there to just make things worse if you choose a large one, just as in nuclear physics larger nuclei are harder to calculate. I show you below the result of calculating this expectation value using the Monte Carlo approach (data with error bars), along with the analytical exact result (solid line), and as inset the average "sign" $\Phi$ of the calculation. The sign here is just the expectation value of
$$\Phi(z)=\frac{\cos(\sigma z)}{|\cos(\sigma z)|}$$

You see that for increasing $z$, the Monte Carlo average becomes very noisy, and the average sign disappears. For a $z$ larger than three, this calculation is quite hopeless: sign 1, Monte Carlo 0.

I want to make one thing clear here: of course you would not use the Monte Carlo method to calculate this integral if you can do it "by hand" (as you can for the example I show here). I'm using this integral as a test case, because the exact result is easy to get. The gist is: if you can solve this integral computationally, maybe you can solve those integrals for which you don't know the answer analytically in the same manner. And then you solve the sign problem. So what other methods are there?
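If you want to watch the sign problem happen on your own machine, here is a quick Monte Carlo sketch of the $N=1$ case: sample $\sigma$ from the Gaussian and reweight by the cosine. The sample size is an arbitrary choice, and the "exact" column is the $N=1$ answer you get by differentiating the Gaussian integral twice with respect to $z$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = rng.standard_normal(200_000)       # importance-sample the Gaussian factor

print("   z    MC estimate of <sigma^2>    exact (1 - z^2)    average sign")
for z in (0.5, 1.0, 2.0, 3.0):
    w = np.cos(sigma * z)                  # the oscillating weight (N = 1)
    estimate = np.sum(sigma**2 * w) / np.sum(w)
    avg_sign = np.sum(w) / np.sum(np.abs(w))   # one standard measure of the average sign
    print(f"  {z:3.1f}   {estimate:20.3f}    {1 - z**2:14.3f}    {avg_sign:10.4f}")
```

As $z$ grows the average sign collapses toward zero and the estimate gets noisier and noisier, even with hundreds of thousands of samples. That is the sign problem in a nutshell.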

The solution I proposed was using the complex Langevin equation. Before moving to the complex version (and why), let's look at using the real Langevin equation to calculate averages. The idea here is the following. When you calculate an integral using the Monte Carlo approach, what you are really doing is summing over a set of sample points that are accepted or rejected probabilistically, in such a way that the accepted points form a sequence of random samples approximating the probability distribution that you want to integrate over.

But there are other methods to create sequences that appear to be drawn from a given probability distribution. One is the Langevin equation which I'm going to explain. Another is the Fokker-Planck equation, which is related to the Langevin equation but that I'm not going to explain. 

Here's the theory (not due to me, of course) on how you use the Langevin equation to calculate averages. Say you want to calculate the expectation value of a function $O(\sigma)$. To do that, you need to average $O(\sigma)$, which means you sum (and by that I mean integrate) this function over the probability that you find $\sigma$. The idea here is that $\sigma$ is controlled by a physical process: $\sigma$ does not change randomly, but according to some laws of physics. You want to know the average $O$, which depends on $\sigma$, given that $\sigma$ changes according to some natural process.

If you think about it long enough, you realize that many many things in physics boil down to calculating averages just like that. Say, the pressure at room temperature given that the molecules are moving according to the known laws of physics. Right, almost everything in physics, then. So you see, being able to do this is important. Most of the time, Monte Carlo will serve you just fine. We are dealing with all the other cases here. 

First, we need to make sure we capture the fact that the variable $\sigma$ changes according to some physical law. When you are first exposed to classical mechanics, you learn that the time development of any variable is described by a Lagrangian function (and then you move on to the Hamiltonian so that you are prepared to deal with quantum mechanics, but we won't go there here). The integral of the Lagrangian is called the "action" $S$, and that is the function used to quantify how likely any trajectory of $\sigma$ is, given that it follows these laws. For example, if you are a particle subject to gravity, then I can write down for you the Lagrangian (and hence the action) that makes sure the particle follows that law. It is $L=-\frac12m v^2+mV(\sigma)$, where $m$ is the mass, $v$ is the velocity of the $\sigma$ variable, $v=d\sigma/dt$, and $V(\sigma)$ is the gravitational potential.

The action is $S=\int L(\sigma(t))\,dt$, and the equilibrium distribution of $\sigma$ is
$$P(\sigma)=\frac1Z e^{-S}$$ where $Z$ is the partition function $Z=\int e^{-S}d\sigma$.

In computational physics, what you want is a process that creates this equilibrium distribution, because if you have it, then you can just sum over the variables so created and you have your integral. Monte Carlo is one method to create that distribution. We are looking for another. 

It turns out that the Langevin equation
$$\frac{d\sigma}{dt}=-\frac12 \frac{dS}{d\sigma}+\eta(t)$$
creates precisely such a process. Here, $S$ is the action for the process, and $\eta(t)$ is a noise term with zero mean and unit variance:
$$\langle \eta(t)\eta(t^{\prime})\rangle=\delta(t-t^\prime).$$
Note that $t$ here is a "fictitious" time: we use it only to create a set of $\sigma$s that are distributed according to the probability distribution $P(\sigma)$ above. If we have this fictitious time series $\sigma_0(t)$ (the solution to the differential equation above), then we can just average the observable $O(\sigma)$ along it:
$$\langle O\rangle=\lim_{T\to\infty}\frac1T\int_0^T O(\sigma_0(t))\,dt.$$
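Before applying this to our oscillating integrand, it may help to see the recipe in code. Below is a minimal sketch (my own illustration, not from the paper; the function name and parameters are invented) of a real Langevin sampler: discretize the fictitious time with a small step, add Gaussian noise whose variance equals the step size, and time-average the observable. As a sanity check I use the harmless action $S=\frac12\sigma^2$, for which the equilibrium distribution is the unit Gaussian and $\langle\sigma^2\rangle$ should come out close to 1.

```python
import numpy as np

def langevin_average(dS, observable, dt=0.01, steps=200_000, burn_in=10_000, seed=1):
    """Average an observable over a real Langevin trajectory.

    Discretizes d(sigma)/dt = -dS/dsigma / 2 + eta(t); the noise added per
    step has variance dt, matching <eta(t) eta(t')> = delta(t - t').
    """
    rng = np.random.default_rng(seed)
    sigma, total, count = 0.0, 0.0, 0
    for step in range(steps):
        sigma += -0.5 * dS(sigma) * dt + np.sqrt(dt) * rng.standard_normal()
        if step >= burn_in:
            total += observable(sigma)
            count += 1
    return total / count

# Sanity check with the Gaussian action S = sigma^2/2 (so dS/dsigma = sigma):
# the equilibrium distribution is exp(-sigma^2/2) and <sigma^2> should be ~1.
print(langevin_average(dS=lambda s: s, observable=lambda s: s**2))
```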
Let's try the "Langevin approach" to calculating averages on the example integral $\langle \sigma^2\rangle_N$ above. The action we have to use is
$$S=\frac12 \sigma^2-N\ln [\cos(\sigma z)]$$ so that $e^{-S}$ gives exactly the integrand we are looking for. Remember, all expectation values are calculated as
$$\langle O\rangle=\frac{\int O(\sigma) e^{-S(\sigma)}d\sigma}{\int e^{-S(\sigma)}d\sigma}.$$

With that action, the Langevin equation is
$$\dot \sigma=-\frac12(\sigma+Nz\tan(\sigma z))+\eta \ \ \ \      (1)$$
This update rule creates a sequence of $\sigma$ that can be used to calculate the integral in question.
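Here is what a naive Euler discretization of equation (1) looks like, again as an illustrative sketch rather than the paper's actual code (the step size and run length are arbitrary choices of mine). Note the $\tan(\sigma z)$ in the drift: it diverges whenever $\cos(\sigma z)$ crosses zero.

```python
import numpy as np

def real_langevin_mean_square(z, N=1, dt=0.001, steps=500_000, seed=2):
    """Naive Euler discretization of Eq. (1) for S = sigma^2/2 - N*ln cos(sigma*z)."""
    rng = np.random.default_rng(seed)
    sigma, values = 0.0, []
    for _ in range(steps):
        drift = -0.5 * (sigma + N * z * np.tan(sigma * z))  # diverges where cos = 0
        sigma += drift * dt + np.sqrt(dt) * rng.standard_normal()
        values.append(sigma**2)
    return np.mean(values)

# With real sigma, the time average of sigma**2 can never be negative,
# while the exact answer for z = 2 (and N = 1) is 1 - z^2 = -3.
print(real_langevin_mean_square(z=2.0))
```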

And the result is ..... a catastrophe! 

The average does not converge, mainly because in the differential equation (1) I ignored a drift term that goes like $\pm i\delta(\cos(z\sigma))$. That it's there is not entirely trivial, but if you sit with that equation for a little while you'll realize that weird stuff happens whenever the cosine vanishes. That term throws the trajectory all over the place once in a while, giving rise to an average that simply refuses to settle down.

In the end, this is the sign problem raising its ugly head again. You do one thing, you do another, and it comes back to haunt you. Is there no escape?

You've been reading patiently so far, so you must have suspected that there is an escape. There is indeed, and I'll show it to you now.

This simple integral that we are trying to calculate
$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}\cos(\sigma z),$$
we could just as well write as
$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma\, e^{-(1/2)\sigma^2}e^{i\sigma z},$$
because the latter integral really has no imaginary part: the integration range is symmetric, so the sine part drops out.

This is the part that you have to understand to appreciate the paper, and as a consequence this blog post. If you did, skip the next paragraph: it is only there for those who are still scratching their heads.

OK, here's what you learn in school: $e^{ix}=\cos(x)+i\sin(x)$. This formula is so famous it even has its own name: Euler's formula. Now, $\cos(\sigma z)$ is a symmetric function of $\sigma$ (it stays the same if you change $\sigma\to-\sigma$), while $\sin(\sigma z)$ is anti-symmetric ($\sin(-\sigma z)=-\sin(\sigma z)$). An integral from $-\infty$ to $\infty$ kills any anti-symmetric integrand: only the symmetric part survives. And since the Gaussian factor is itself symmetric, $\int_{-\infty}^\infty d\sigma\, e^{-(1/2)\sigma^2} e^{i\sigma z}= \int_{-\infty}^\infty d\sigma\, e^{-(1/2)\sigma^2}\cos(\sigma z)$.
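If you'd rather see it than believe it, here is a quick numerical sanity check (a sketch using scipy's quadrature; the value of $z$ is an arbitrary choice): the sine part integrates to zero, so the cosine integral and the complex-exponential integral coincide, and both equal $e^{-z^2/2}$.

```python
import numpy as np
from scipy.integrate import quad

z = 1.7  # any value will do
gauss = lambda s: np.exp(-0.5 * s**2) / np.sqrt(2 * np.pi)

cos_part, _ = quad(lambda s: gauss(s) * np.cos(s * z), -np.inf, np.inf)
sin_part, _ = quad(lambda s: gauss(s) * np.sin(s * z), -np.inf, np.inf)

# The anti-symmetric (sine) part vanishes; the cosine part equals exp(-z^2/2).
print(cos_part, sin_part, np.exp(-0.5 * z**2))
```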

This is the one flash of brilliance in the entire paper: that you can replace a cosine by a complex exponential if you are dealing with symmetric integrals. This changes everything for the Langevin equation (it doesn't do that much for the Monte Carlo approach). The rest was showing that this also works for more complicated shell models of nuclei, rather than just the trivial integral I showed you. Well, you also have to figure out how to replace oscillating weights that are not just a cosine (that is, how to extend arbitrary actions with non-positive weights into the complex plane), but in the end it turns out that this can be done if necessary.

But let's first see how this changes the Langevin equation. 

Let's first look at the case $N=1$. The action for the Langevin equation was 
$$S=\frac12\sigma^2-\ln\cos(\sigma z)$$
If you replace the cos, the action instead becomes
$$S=\frac12\sigma^2\pm i\sigma z.$$ The fixed point of the differential equation (1), which used to sit on the real line where the trajectory could not avoid the singular drift at the zeros of $\cos(\sigma z)$, has now moved off into the complex plane.

And in the complex plane there are no singularities! Because they are all on the real line! As a consequence, the averages based on the complex action should converge! The sign problem can be vanquished just by moving to the complex Langevin equation!
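Here is what that looks like in practice, again as a minimal sketch of my own rather than the paper's code (the Euler step size and run length are arbitrary): with the extended action $S=\frac12\sigma^2+i\sigma z$ the drift is $-\frac12(\sigma+iz)$, smooth everywhere, with its fixed point at $\sigma=-iz$. The variable $\sigma$ is now complex while the noise stays real, and the time average of $\sigma^2$ should land on the exact value $1-z^2$, with an imaginary part that is zero up to statistical noise.

```python
import numpy as np

def complex_langevin_mean_square(z, dt=0.01, steps=200_000, burn_in=10_000, seed=3):
    """Complex Langevin for the extended action S = sigma^2/2 + i*sigma*z (N = 1)."""
    rng = np.random.default_rng(seed)
    sigma = 0.0 + 0.0j
    total, count = 0.0 + 0.0j, 0
    for step in range(steps):
        # Smooth drift -(sigma + i*z)/2: no tan, no poles, fixed point at -i*z.
        sigma += -0.5 * (sigma + 1j * z) * dt + np.sqrt(dt) * rng.standard_normal()
        if step >= burn_in:
            total += sigma**2
            count += 1
    return total / count

for z in (1.0, 2.0, 3.0):
    est = complex_langevin_mean_square(z)
    print(f"z={z}: {est.real:+.2f} {est.imag:+.2f}i   (exact: {1 - z*z:+.2f})")
```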

And that explains the title of the paper. Sort of. In the figure below, I show you how the complex Langevin equation fares in calculating the integral that, scrolling up all the way, gave rise to such bad error bars with the Monte Carlo approach. The triangles in that plot show the result of using the real Langevin equation. That's the catastrophe I mentioned: not only is the result wrong, it doesn't even have large error bars, so it is wrong with conviction!

The squares (and the solid line) come from using the extended (complex) action in the Langevin equation. It reproduces the exact result precisely.


Average calculated with the real Langevin equation (triangles) and the complex Langevin equation (squares), as a function of the variable $z$. The inset shows the "sign" of the integral, which still vanishes at large $z$ even as the complex Langevin equation remains accurate.
The rest of the paper is somewhat anti-climactic. First I show that the same trick works in a quantum-mechanical toy model of rotating nuclei (as opposed to the trivial example integral). I offer the plot below from the paper as proof:
Solid line is exact theory, symbols are my numerical estimates. You've got to hand it to me: Complex Langevin rules.

But if you want to convince the nuclear physicists, you have to do a little bit more than solve a quantum mechanical toy model. Short of solving the entire beast of the nuclear shell model, I decided to tackle something in between: the Lipkin model (sometimes called the Lipkin-Meshkov-Glick model), which is a schematic nuclear shell model that is able to describe collective effects in nuclei. And the advantage is that exact analytic solutions to the model exist, which I can use to compare my numerical estimates to.

The math for this model is far more complicated, and I spare you the exposition for the sake of sanity here. (Mine, not yours.) A lot of path integrals to calculate. The only thing I want to say here is that in this more realistic model the complex plane is not entirely free of singularities: there are in fact infinitely many of them. But because they lie scattered in the complex plane, a random trajectory will avoid them almost all of the time, whereas you are guaranteed to run into them when they sit on the real line and the dynamics keeps returning you to that line without fail. That is, in a nutshell, the discovery of this paper.

So, this is obviously not a well-known contribution. This is a bit of a bummer, because the sign problem still very much exists, in particular in lattice gauge theory calculations of matter at finite chemical potential (meaning, at finite density). Indeed, a paper came out just recently (see the arXiv link in case you ended up behind a paywall) where the authors try to circumvent the sign problem in lattice QCD at finite density by doing the calculations explicitly at high temperature, using the old trick of doubling your degrees of freedom. Incidentally, this is the same trick that gives you black holes at the Hawking temperature, because the event horizon naturally doubles degrees of freedom. I used this trick a lot when calculating Feynman diagrams in QCD at finite temperature. But that's a fairly well-known paper, so I can't discuss it here.

Well, maybe some brave soul will one day rediscover this work and write a "big code" that solves the problem once and for all using this trick. I think the biggest reason why this paper never got any attention is that I don't write big code. I couldn't apply this to a real-world problem, because to do that you need mad software engineering skills. And I don't have those, as anybody who knows me will be happy to tell you.

So there this work lingers. Undiscovered. Lonely. Unappreciated. Like sooo many other papers by sooo many other researchers over time. If only there was a way that old papers like that could get a second chance! If only :-)

[1] H. Gausterer, Complex Langevin: A numerical method? Nuclear Physics A 642 (1998) 239c-250c.
[2] C. Adami and S.E. Koonin, Complex Langevin equation and the many-fermion problem. Physical Review C 63 (2001) 034319.