Eqs

Saturday, May 23, 2015

What happens to an evaporating black hole?

For years now I have written about the quantum physics of black holes, and each and every time I have pushed a single idea: that if black holes behave as (almost perfect) black bodies, then they should be described by the same laws as black bodies are. And that means that besides the obvious processes of absorption and reflection, there is the quantum process of spontaneous emission (discovered to occur in black holes by Hawking), and this other process, called stimulated emission (neglected by Hawking, but discovered by Einstein). The latter solves the problem of what happens to information that falls into the black hole, because stimulated emission makes sure that a copy of that information is always available outside of the black hole horizon (the stories are a bit different for classical vs. quantum information). These stories are told in a series of posts on this blog:

Oh these rascally black holes (Part I)
Oh these rascally black holes (Part II)
Oh these rascally black holes (Part III)
Black holes and the fate of quantum information
The quantum cloning wars revisited 

I barely ever thought about what happens to a black hole if nothing is falling into it. We all know (I mean, we have been told) that the black hole is evaporating. Slowly, but surely. Thermodynamic calculations can tell you how fast this evaporation process is: the rate of mass loss is inversely proportional to the square of the black hole mass.

But there is no calculation of the entropy (and hence the mass) of the black hole as a function of time!

Actually, I should not have said that. There are plenty of calculations of this sort. There is the CGHS model, the JT model, and several others. But these are models of quantum gravity in which the scalar field of standard curved-space quantum field theory (CSQFT, the theory developed by Hawking and others to understand Hawking radiation) is coupled in one way or another to another field (often the dilaton). You cannot calculate how the black hole loses its mass in standard CSQFT, because that theory is a free field theory! Those quantum fields interact with nothing!

The way you recover the Hawking effect in a free field theory is that you do not map the vacuum from time $t=0$ to a finite time $t$; instead, you map from past infinity to future infinity. So time disappears in CSQFT! Wouldn't it be nice if we had a theory that in some limit just becomes CSQFT, but allows us to explicitly couple the black hole degrees of freedom to the radiation degrees of freedom, so that we could do a time-dependent calculation of the S-matrix?

Well this post serves to announce that we may have found such a theory ("we" is my colleague Kamil Brádler and I). The link to the arXiv article will be below, but before you sneak a peek let me first put you in the right mind to appreciate what we have done.

In general, when you want to understand how a quantum state evolves forward in time, from time $t_1$ to time $t_2$, say, you write
$$|\Psi(t_2)\rangle=U(t_2,t_1)|\Psi(t_1)\rangle\ \ \     (1)$$
where $U$ is the unitary time evolution operator
$$U(t_2,t_1)=Te^{-i\int_{t_1}^{t_2}H(t')dt'}\ \ \      (2)$$
The $H$ is of course the interaction Hamiltonian, which describes the interaction between quantum fields. The $T$ is Dyson's time-ordering operator, and assures that products of operators always appear in the right temporal sequence. But the interaction Hamiltonian $H$ does not exist in free-field CSQFT.

In my previous papers with Brádler and with Ver Steeg, I hinted at something, though. There we write this mapping from past to future infinity in terms of a Hamiltonian (oh, the wrath that this incurred from staunch relativists!) like so:
$$|\Psi_{\rm out}\rangle=e^{-iH}|\Psi_{\rm in}\rangle\ \ \     (3)$$
where $|\Psi_{\rm in}\rangle$ is the quantum state at past infinity, and $|\Psi_{\rm out}\rangle$ is at future infinity. This mapping really connects creation and annihilation operators via a Bogoliubov transformation
$$A_k=e^{-iH}a_ke^{iH}\ \ \ (4)$$
where the $a_k$ are defined on the past null infinity time slice, and the $A_k$ at future null infinity, but writing it as (3) makes it almost look as if $H$ is a Hamiltonian, doesn't it? Except there is no $t$. The same $H$ is in fact used in quantum optics a lot, and describes squeezing. I added to this a term that allows for scattering of radiation modes on the horizon in the 2014 article with Ver Steeg, and that term can be seen as a beam splitter in quantum optics. But it is not an interaction operator between black holes and radiation. 
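
To make the quantum-optics analogy concrete, here is a minimal numerical sketch (mine, not from any of the papers) of the mapping (3)-(4). The post does not write $H$ out explicitly; I assume the standard two-mode squeezing form $H=ir(a^\dagger b^\dagger-ab)$ for two radiation modes on either side of the horizon, with the cutoff and squeezing strength chosen for illustration. Acting with $e^{-iH}$ on the vacuum and tracing out the mode behind the horizon should leave a thermal (Planckian) state with mean occupation $\sinh^2 r$, which is the Hawking effect in a nutshell:

```python
# Minimal sketch (assumption: H = i r (a†b† - ab), the standard two-mode
# squeezing generator; the Fock-space cutoff and r are illustrative).
import numpy as np
from scipy.linalg import expm

d, r = 15, 0.5                                   # cutoff per mode, squeezing strength
a = np.diag(np.sqrt(np.arange(1, d)), k=1)       # single-mode annihilation operator
I = np.eye(d)
A, B = np.kron(a, I), np.kron(I, a)              # the two radiation modes

U = expm(r * (A.conj().T @ B.conj().T - A @ B))  # e^{-iH} with H = i r (A†B† - AB)

vac = np.zeros(d * d); vac[0] = 1.0              # |0,0> at past infinity
psi = U @ vac                                    # |Psi_out>, as in eq. (3)

psi_mat = psi.reshape(d, d)                      # trace out the mode behind the horizon
rho_out = psi_mat @ psi_mat.conj().T
n_mean = np.trace(rho_out @ np.diag(np.arange(d))).real
print(n_mean, np.sinh(r)**2)                     # thermal occupation, sinh^2(r)
```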

For the longest time, I didn't know how to make time evolution possible for black holes, because I did not know how to write the interaction. Then I became aware of a paper by Paul Alsing from the Air Force Research Laboratory, who had read my paper on the classical capacity of the quantum black hole channel, repeated all of my calculations (!), and realized that there exists, in quantum optics, an extension to the Hamiltonian that explicitly quantizes the black hole modes! (Paul's background is quantum optics, so he was perfectly positioned to realize this.)

Because you see, the CSQFT that everybody has been using since Hawking is really a semi-classical approximation to quantum gravity, in which the black hole "field" is static. It is not quantized, and it does not change. It is a background field. That's why the change of the black hole mass and entropy cannot be calculated: there is no back-reaction from the Hawking radiation (or the stimulated radiation, for that matter) on the black hole. In the parlance of quantum optics, this approximation is called the "undepletable pump" scenario. What pump, you ask?

In quantum optics, "pumps" are used to create excited states of atoms. You can't have lasers, for example, without a pump that creates and re-creates the inversion necessary for lasing. The squeezing operation that I talked about above is, in quantum optics, performed via parametric downconversion, where a nonlinear crystal is used to split photons into pairs like so:
Fig. 1: Spontaneous downconversion of a pump beam into a "signal" and an "idler" beam. Source: Wikimedia
Splitting photons? How is that possible? Well it is possible because of stimulated emission! Basically, you are seeing the quantum copy machine at work here, and this quantum copy machine is "as good as it gets" (not perfect, in other words, because you remember of course that perfect quantum copying is impossible). So now you see why there is such a tantalizing equivalence between black holes and quantum optics: the mathematics describing spontaneous downconversion and black hole physics is the same: eqs (3) and (4). 

But these equations do not quantize the pump: it is "undepleted" and remains so. This means that in this description, the pump beam is maintained at the same intensity. But quantum opticians have learned how to quantize the pump mode as well! This is done using the so-called "trilinear Hamiltonian": it has quantum fields not just for the signal and idler modes (think of these as the radiation behind and in front of the horizon), but for the pump mode as well. Basically, you start out with the pump in a mode with lots of photons in it, and as they get down-converted the pump slowly depletes, until nothing is left. This will be the model of black hole evaporation, and this is precisely the approach that Alsing took, in a paper that appeared in the journal "Classical and Quantum Gravity" last year. 

"So Alsing solved it all", you are thinking, "why then this blog post?" 

Not so fast. Alsing brought us on the right track, to be sure, but his calculation of the quantum black hole entropy as a function of time displayed some weird features. The entropy appeared to oscillate rather than slowly decrease. What was going on here?

For you to appreciate what comes now, I need to write down the trilinear Hamiltonian:
$$H_{\rm tri}=ir(ab^\dagger c^\dagger-a^\dagger bc)\ \ \ (5) $$
Here, the modes $b$ and $c$ are associated with radiation degrees in front of and behind the horizon, whereas $a$ is the annihilation operator for black hole modes (the "pump" modes). Here's a pic so that you can keep track of these.
Fig. 2: Black hole and radiation modes $b$ and $c$.
In the semi-classical approximation, the $a$ modes are replaced with their background-field expectation value, which morphs $H_{\rm tri}$ into $H$ in eqs. (3) and (4), so that's wonderful: the trilinear Hamiltonian turns into the Hermitian operator implementing Hawking's Bogoliubov transformation in the semi-classical limit. 
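
Schematically (my shorthand, treating the pump expectation value as a real number $\sqrt{n_p}$, with $n_p$ the mean number of pump quanta):
$$H_{\rm tri}=ir\,(ab^\dagger c^\dagger-a^\dagger bc)\ \xrightarrow{\ a\to\langle a\rangle=\sqrt{n_p}\ }\ ir\sqrt{n_p}\,(b^\dagger c^\dagger-bc),$$
which is the two-mode squeezing generator that implements the Bogoliubov transformation (4), with an effective strength $r\sqrt{n_p}$ set by the black hole.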

But how do you use $H_{\rm tri}$ to calculate the S-matrix I wrote down long ago, at the very beginning of this blog post? One thing you could do is to simply say, 
$$U_{\rm tri}=e^{-iH_{\rm tri}t}\ ,$$
and then the role of time is akin to a linearly increasing coefficient $r$ in eq. (5). That's essentially what Alsing did (and Nation and Blencowe before him, see also Paul Nation's blog post about it) but that, it turns out, is only a rough approximation of the true dynamics, and does not give you the correct result, as we will see. 

Suppose you calculate $|\Psi_{\rm out}\rangle=e^{-iH_{\rm tri}t}|\Psi_{\rm in}\rangle$, and from the density matrix $\rho_{\rm out}=|\Psi_{\rm out}\rangle \langle \Psi_{\rm out}|$ you obtain the reduced density matrix of the black hole modes, $\rho_{\rm bh}={\rm Tr}_{bc}\,\rho_{\rm out}$, whose von Neumann entropy is
$$S_{\rm bh}=-{\rm Tr}\, \rho_{\rm bh}\log \rho_{\rm bh}\ \ \ (6)$$
Note that this entropy is exactly equal to the entropy of the radiation modes $b$ and $c$ taken together, because the joint state is pure: the initial black hole is in a pure state with zero entropy, and the radiation starts out in the vacuum. 
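
If you want to see eqs. (5) and (6) in action, here is a minimal numerical sketch (mine, with illustrative parameters; not the calculation from the paper). Because $H_{\rm tri}$ in eq. (5) is Hermitian, the evolution $e^{-iH_{\rm tri}t}$ is unitary, and a Fock-space cutoff of $n+1$ per mode is exact when you start from $|n,0,0\rangle$, since $H_{\rm tri}$ conserves both $n_a+n_b$ and $n_a+n_c$:

```python
# Minimal sketch of eqs. (5) and (6): evolve |n,0,0> with exp(-i H_tri t) and
# compute the black-hole entropy by tracing out the radiation modes b and c.
# Parameters n, r, t are illustrative.
import numpy as np
from scipy.linalg import expm

def lowering(d):
    return np.diag(np.sqrt(np.arange(1, d)), k=1)

n, r, t = 5, 1.0, 0.3
d = n + 1                                 # exact cutoff for the initial state |n,0,0>
a0, I = lowering(d), np.eye(d)
a = np.kron(np.kron(a0, I), I)            # black-hole ("pump") mode
b = np.kron(np.kron(I, a0), I)            # radiation mode in front of the horizon
c = np.kron(np.kron(I, I), a0)            # radiation mode behind the horizon

K = a @ b.conj().T @ c.conj().T - a.conj().T @ b @ c      # K = -i H_tri / r
U = expm(r * t * K)                       # exp(-i H_tri t), unitary

psi0 = np.zeros(d**3); psi0[n * d * d] = 1.0              # |Psi_in> = |n,0,0>
psi = U @ psi0

rho_bh = np.einsum('ijk,ljk->il', psi.reshape(d, d, d),
                   psi.conj().reshape(d, d, d))           # trace over b and c
p = np.linalg.eigvalsh(rho_bh)
S_bh = -sum(x * np.log2(x) for x in p if x > 1e-12)       # eq. (6), in bits
print(S_bh)
```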

How can a black hole that starts with zero entropy lose entropy, you ask? 

That's a good question. We begin at $t=0$ with a black hole in a defined state of $n$ modes (the state $|\Psi_{\rm in}\rangle=|n\rangle$) for convenience of calculation. We could instead start in a mixed state, but the results would not be qualitatively different after the black hole has evolved for some time, while the calculation would be much harder. Indeed, after interacting with the radiation the black hole modes become mixed anyway, and so you should expect the entropy to start rising from zero quickly at first, and only after it approaches its maximum value does it decay. That is a behavior that black hole folks are fairly used to, as a calculation performed by Don Page in 1993 shows essentially (but not exactly) this behavior. 

Page constructed an entirely abstract quantum information-theoretic scenario: suppose you have a pure bi-partite state (like we start out with here, where the black hole is one half of the bi-partite state and the radiation field $bc$ is the other), and let the two systems interact via random unitaries. Basically he asked: "What is the entropy of a subsystem if the joint system is in a random state?" The answer, as a function of the (log of the) dimension of the radiation subsystem, is shown here:
Fig. 3: Page curve (from [1]) showing first the increase in entanglement entropy of the black hole, and then a decrease back to zero. 
People usually assume that the dimension of the radiation subsystem (dubbed by Page the "thermodynamic entropy", as opposed to the entanglement entropy) is just a proxy for time, so that what you see in this "Page curve" is how at first the entropy of the black hole increases with time, then turns around at the "Page time", until it finally vanishes.

This calculation (which has zero black hole physics in it) turned out to be extremely useful, as it showed that the amount of information from the black hole (defined as the maximum entropy minus the entanglement entropy) may take a long time to come out (namely at least the Page time), and it would be essentially impossible to determine from the radiation field that the joint field is actually in a pure state. But as I said, there is no black hole physics in it, as the random unitaries used in that calculation were, well, random.  

Say you use $U_{\rm tri}$ for the interaction instead? This is essentially the calculation that Alsing did, and it turns out to be fairly laborious, because as opposed to the bi-linear Hamiltonian, which can be solved analytically, you can't do that with $U_{\rm tri}$. Instead, you have to either expand $H_{\rm tri}$ in $rt$ (that really only works for very short times) or use other methods. Alsing used an approximate partial differential equation approach for the quantum amplitude $c_n(t)=\langle n|e^{-iH_{\rm tri}t}|\Psi_{\rm in}\rangle$. The result shows the increase of the black hole entropy with time as expected, and then indeed a decrease:
Fig. 4: Black hole entropy using (6) for $n$=16 as a function of $t$
Actually, the figure above is not from Alsing (but very similar to his), but rather is one that Kamil Brádler made, but using a very different method. Brádler figured out a method to calculate the action of $U_{\rm tri}$ on a vacuum state using a sophisticated combinatorial approach involving something called a "Dyck path". You can find this work here. It reproduces the short-time result above, but allows him to go much further out in time, as shown here:
Fig. 5: Black hole entropy as in Fig. 4, at longer times. 
The calculations shown here are fairly intensive numerical affairs, as in order to get converging results, up to 500 terms in the Taylor expansion have to be summed. This result suggests that the black hole entropy is not monotonically decreasing, but rather is oscillating, as if the black hole was absorbing modes from the surrounding radiation, then losing them again. However, this is extremely unlikely physically, as the above calculation is performed in the limit of perfectly reflecting black holes. But as we will see shortly, this calculation does not capture the correct physics to begin with. 

What is wrong with this calculation? Let us go back to the beginning of this post, the time evolution of the quantum state in eqs. (1,2). The evolution operator $U(t_2,t_1)=Te^{-i\int_{t_1}^{t_2}H(t')dt'}$, applied to the initial state, gives rise to an integral over the state space: a path integral. How did that get replaced by just $e^{-iHt}$? 

We can start by discretizing the integral into a sum, so that $\int_0^t H(t')dt'\approx\sum_{i=0}^NH(t_i)\Delta t$, where $\Delta t$ is small, and $N\Delta t=t$. And because that sum is in the exponent, $U$ actually turns into a product:
$$U(0,t)\approx \Pi_{i=0}^N e^{-i\Delta t H(t_i)}\ \ \ (7)$$
Because of the discretization, each Hamiltonian $H(t_i)$ acts on a different Hilbert space, and the ground state that $U$ acts on now takes the form of a product state of time slices
$$|0\rangle_{bc}=|0\rangle_1|0\rangle_2\times...\times |0\rangle_N$$
And because of the time-ordering operator, we are sure that the different terms of $U(0,t)$ are applied in the right temporal order. If all this seems strange and foreign to you, let me assure you that this is a completely standard approximation of the path integral in quantum many-body physics. In my days as a nuclear theorist, that was how we calculated expectation values in the shell model describing heavy nuclei. I even blogged about this approach (the Monte Carlo path integral approach) in the posts about nifty papers that nobody is reading. (Incidentally, nobody is reading those posts either.)

And now you can see why Alsing's calculation (and Brádler's initial recalculation of the same quantity with very different methods, confirming Alsing's result) was wrong: it represents an approximation of (7) using a single time slice only ($N$=1). This approximation has a name in quantum many-body physics: it is called the "Static Path Approximation" (SPA). The SPA can be accurate in some cases, but it is generally only expected to be good at small times. At larger times, it ignores the self-consistent temporal fluctuations that the full path integral describes.

So now you know what we did, of course: we calculated the path integral of the S-matrix of the black hole interacting with the radiation field using many many time slices. Kamil was able to do several thousand time slices, just to make sure that the integral converges. And the result looks very different from the SPA. Take a look at the figure below, where we calculated the black hole entropy as a function of the number of time slices (which is our discretized time)
Fig. 6: Black hole entropy as a function of time, for three different initial number of modes. Orange: $n$=5, Red: $n$=20, Pink: $n$=50. Note that the logarithm is taken to the base $n+1$, to fit all three curves on the same plot. Of course the $n=50$ entropy is much larger than the $n=5$ entropy. $\Delta t=1/15$. 
This plot shows that the entropy quickly increases as the pure state decoheres, and then starts to drop because of evaporation. Obviously, if we started with a mixed state rather than a pure state, the entropy would just drop. The rapid increase at early times is just a reflection of our short-cut of starting with a pure state. It doesn't look exactly like Page's curves, but we cannot expect that, as our $x$-axis is indeed time, while Page's was thermodynamic entropy (which is expected to be linear in time). Note that Kamil repeated the calculation using an even smaller $\Delta t=1/25$, and the results do not change.
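
To give you a feeling for what such a time-sliced calculation involves, here is a minimal "collision-model" sketch. This is my own reading of eq. (7), not the authors' code: at each slice the black-hole mode interacts with a fresh pair of radiation modes prepared in the vacuum, which are then traced out before the next slice. Parameters and cutoffs are illustrative.

```python
# Minimal collision-model sketch of the time-sliced evolution, eq. (7).
# Assumption: every slice couples the hole to a fresh (b,c) vacuum pair via
# the trilinear interaction, and the pair is traced out after the slice.
import numpy as np
from scipy.linalg import expm

def lowering(d):
    return np.diag(np.sqrt(np.arange(1, d)), k=1)

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    return -sum(x * np.log2(x) for x in p if x > 1e-12)

n, r, dt, slices = 5, 1.0, 1.0 / 15, 300          # illustrative parameters
d = n + 1
a0, I = lowering(d), np.eye(d)
a = np.kron(np.kron(a0, I), I)
b = np.kron(np.kron(I, a0), I)
c = np.kron(np.kron(I, I), a0)
U = expm(r * dt * (a @ b.conj().T @ c.conj().T - a.conj().T @ b @ c))

rho_bh = np.zeros((d, d)); rho_bh[n, n] = 1.0     # black hole starts in |n><n|
vac_bc = np.zeros((d * d, d * d)); vac_bc[0, 0] = 1.0

S = []
for _ in range(slices):
    rho = np.kron(rho_bh, vac_bc)                 # attach a fresh radiation slice
    rho = U @ rho @ U.conj().T                    # one factor of eq. (7)
    rho_bh = np.einsum('ijkj->ik', rho.reshape(d, d * d, d, d * d))
    S.append(entropy(rho_bh))                     # black-hole entropy vs. time
print(S[::50])
```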

I want to throw out some caution here. The trilinear Hamiltonian is not derived from first principles (that is, from a quantum theory of gravity). It is a "guess" at what the interaction term between quantized black hole modes and radiation modes might look like. The guess is good enough that it reproduces standard CSQFT in the semi-classical limit, but it is still a guess. But it is also very satisfying that such a guess allows you to perform a straightforward calculation of the black hole entropy as a function of time, showing that the entropy can actually get back out. One of the big paradoxes of black hole physics was always that as the black hole mass shrank, all calculations implied that the entanglement entropy steadily increases and never turns over as in Page's calculation. This was not a tenable situation for a number of physical reasons (and this is such a long post that I will spare you these). We have now provided a way in which this can happen. 

So now you have seen with your own eyes what may happen to a black hole as it evaporates. The entropy can indeed decrease, and within a simple "extended Hawking theory", all of it gets out. This entropy is not information, mind you, as there is no information in a black hole unless you throw some in it (see my series "What is Information?" if this is cryptic to you). But Steve Giddings convinced me (on the beach at Vieques no less, see photo below) that solving the information paradox was not enough: you've got to solve the entropy paradox also. 

A quantum gravity session at the beach in Vieques, Puerto Rico (January 2014). Steve Giddings is in sunglasses watching me explain stimulated emission in black holes. 
I should also note that there is a lesson in this calculation for the firewall folks (who were quite vocal at the Vieques meeting). Because the entanglement between the black hole and the radiation involves three entities rather than two, monogamy of entanglement can never be violated, so this provides yet another argument (I have shown you two others in earlier posts) against those silly firewalls.

The paper describing these results is on arXiv:

K. Brádler and C. Adami: One-shot decoupling and Page curves from a dynamical model for black hole evaporation

[1] Don Page. Average entropy of a subsystem. Phys. Rev. Lett. 71 (1993) 1291.



Friday, December 5, 2014

Life and Information

I wrote a blog post about "information-theoretic considerations concerning life and its origins" for PBS's Blog "The Nature of Reality", but as they own the copyright to that piece, I cannot reproduce it here. You are free to follow the link, though: "Living Bits: Information and the Origin of Life".

I'm not complaining: the contract I signed clearly spelled out ownership. I can ask for permission to reproduce it, and I may. 

The piece is based in part on an article that is currently in review. You can find the arXiv version here, and some other bloggers' comments here and here.

Creationists also had something to say about that article, but I won't link any of it here. After all, this is a serious blog. 

Sunday, November 9, 2014

On quantum measurement (Part 5: Quantum Venn diagrams)

Here's what you missed, in case you have stumbled into this series midway. As you are wont to do, of course.

Part 1 had me reminiscing about how I got interested in the quantum measurement problem, even though my Ph.D. was in theoretical nuclear physics, not "foundational" stuff, and introduced the incomparable Hans Bethe, who put my colleague Nicolas Cerf and me on the scent of the problem.

Part 2 provides a little bit of historical background. After all, a bunch of people have thought about quantum measurement, and they are all rightfully famous: Bohr, Einstein, von Neumann. Two of those three are also heroes of mine. Two, not three.

Part 3 starts out with the math of classical measurement, and then goes on to show that quantum mechanics can't do anything like that, because no-cloning. Really: the no-cloning theorem ruins quantum measurement. Read about it if you don't believe me. 

Part 4 goes further. In that part you learn that measuring something in quantum physics means not looking at the quantum system, and that classical measurement devices are, in truth,  really, really large quantum measurement devices, whose measurement basis is statistically orthogonal to the quantum system (on account of them being very high-dimensional). But that you should still respect their quantumness, which Bohr did not.  

Sometimes I wonder what our understanding of quantum physics would be like if Bohr had never lived. Well, come to think of it, perhaps I would not be writing this, as Bohr actually gave Gerry Brown his first faculty position at NORDITA in Copenhagen, in 1960 (yes, before I was even born). And it was in Gerry's group that I got my Ph.D., which led to everything else. So, if Niels Bohr had never lived, we would all understand quantum mechanics a little better, and this blog would not only never have been written, but also be altogether unnecessary? So, when I wonder about such things, clearly I am wasting everybody's time.

All right, let's get back to the problem at hand. I showed you how Born's rule emerges from not looking. Not looking at the quantum system, that is, which of course you never do because you are so focused on your classical measurement device (that you fervently hope will reveal to you the quantum truth). And said measurement device then proceeds to lie to you by not revealing the quantum truth, because it can't. Let's study this mathematically.  

First, I will change a bit the description of the measurement process from what I showed you in the previous post, where a quantum system (to be measured) was entangled with a measurement device (which is intrinsically also a quantum system). One of the two (system and measurement device) has a special role (namely, we are going to look at it, because it looks classical). Rather than describing that measurement device by $10^{23}$ qubits measuring that lonely quantum bit, I'm going to describe the measurement device by two bits. I'm doing that so that I can monitor the consistency of the measurement device: after all, each and every fibre of the measurement device should confidently tell the same story, so all individual bits that make up the measurement device should agree. And if I show you only two of the bits and their correlation, that's because it is simpler than showing all $10^{23}$, even though the calculation including all the others would be exactly the same. 

All right, let's do the measurement with three systems: the quantum system $Q$, and the measurement devices (aka ancillas, see previous posts for explanation of that terminology) $A_1$ and $A_2$. 

Initially then, the quantum system and the ancillae are in the state
$$|Q\rangle|A_1A_2\rangle=|Q\rangle|00\rangle.$$
I'll be working in the "position-momentum" picture of measurement again, that is, the state I want to transfer from $Q$ to $A$ is the position $x$. And I'm going to jump right in and say that $Q$ is in a superposition $x+y$. After measurement, the system $QA_1A_2$ will then be
$$|QA_1A_2\rangle=|x,x,x\rangle+|y,y,y\rangle.$$
Note that I'm dispensing with normalizing the state. Because I'm not a mathematician, is why. I am allowed to be sloppy to get the point across.

This quantum state after measurement is pure, which you know of course means that it is perfectly "known", and has zero entropy:
$$\rho_{QA_1A_2}=|QA_1A_2\rangle\langle QA_1A_2|.$$
Yes, obviously something that is perfectly known has zero uncertainty. And indeed, any density matrix of the form $|.\rangle\langle.|$ has vanishing entropy. If you are still wondering why, wait until you see some mixed ("non-pure") states, and you'll know.

Now, you're no dummy. I know you know what comes next. Yes, we're not looking at the quantum system $Q$. We're looking at *you*, the measurement device! So we have to trace out the quantum system $Q$ to do that. Nobody's looking at that.

Quick note on "tracing out". I remember when I first heard that jargony terminology of "tracing over" (or "out") a system. It is a mathematical operation that reduces the dimension of a matrix by "removing" the degrees of freedom that are "not involved". In my view, the only way to really "get" what is going on there is to do one of those "tracing outs" yourself. The best example is, perhaps, to take the joint density matrix of an EPR pair, and "trace out" one of the two elements. Once you've done this and seen the result, you'll know in your guts forever what all this means. If this was a class, I'd show you at least two examples. Alas, it's a blog.

So let's trace out the quantum system, which is not involved in this measurement, after all. (See what I did there?)
$$\rho_{A_1A_2}={\rm Tr}_Q(\rho_{QA_1A_2})=|x,x\rangle\langle x,x|+|y,y\rangle\langle y,y|\;.$$
Hey, this is a mixed state! It has *two* of the $|.\rangle\langle.|$ terms. And if I had done the normalization like I'm supposed to, each one would have a "0.5" in front of it. 

Let's make a quick assessment of the entropies involved here. The entropy of the density matrix $\rho_{A_1A_2}$ is positive because it is a mixed state. But the entropy of the joint system was zero! Well, this is possible because someone you know has shown that conditional entropies can be negative:
$$S(QA_1A_2)=S(Q|A_1A_2)+S(A_1A_2)=0.$$
In the last equation, the left-hand side vanishes because the joint state is pure. The entropy of the mixed classical state (second term on the right-hand side) is positive, implying that the entropy of the quantum system given the measurement device (first term on the right-hand side) is negative.
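
If you want to check these entropies yourself, here is a small numerical sketch (mine, not part of the original post), where I represent the two positions $x$ and $y$ by orthogonal qubit states $|0\rangle$ and $|1\rangle$, so that the post-measurement state is $(|000\rangle+|111\rangle)/\sqrt 2$:

```python
# Minimal sketch: entropies of |QA1A2> = (|000> + |111>)/sqrt(2), with x -> |0>
# and y -> |1> (an assumption made purely for the sake of a concrete example).
import numpy as np

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    return -sum(x * np.log2(x) for x in p if x > 1e-12)

psi = np.zeros(8); psi[0] = psi[7] = 1 / np.sqrt(2)        # |000> + |111>
rho = np.outer(psi, psi.conj())                            # pure joint state

t = rho.reshape(2, 2, 2, 2, 2, 2)                          # indices (Q,A1,A2,Q',A1',A2')
rho_A1A2 = np.einsum('iabicd->abcd', t).reshape(4, 4)      # trace out Q
rho_A1 = np.einsum('abcb->ac', rho_A1A2.reshape(2, 2, 2, 2))  # trace out A2 as well

S_joint, S_A1A2, S_A1 = entropy(rho), entropy(rho_A1A2), entropy(rho_A1)
print(S_joint, S_A1A2, S_A1)          # 0, 1, 1: the device alone looks mixed
print(S_joint - S_A1A2)               # S(Q|A1A2) = -1: negative conditional entropy
```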

What about the measurement device itself? What is the shared entropy between all the "pieces" of the measurement device? Because I gave you only two pieces here, the calculation is much simpler than you might have imagined. I only have to calculate the shared entropy between $A_1$ and $A_2$. But that is trivial given the density matrix $\rho_{A_1A_2}$. Whatever $A_1$ shows, $A_2$ shows also: every single piece of the measurement device agrees with every other piece. Pure bliss and harmony!

Except when you begin to understand that this kumbaya of understanding may have nothing at all to do with the state of the quantum system! They may all sing the same tune, but the melody can be false. Like I said before: measurement devices can lie to you, and I'll now proceed to show you that they must.

The pieces of the measurement device are correlated, all right. A quick look at the entropy Venn diagram will tell you as much:
 Fig. 1: Venn diagram of the entropies in the measurement device made by the pieces $A_1$ and $A_2$.
Here, the entropy $S$ is the logarithm of the number of states that the device can possibly take on. A simple example is a device that can take on only two states, in which case $S=1$ bit. You can also imagine a Venn diagram of a measurement device with more than two pieces. If it is more than five your imagination may become fuzzy. The main thing to remember when dealing with classical measurement devices is that each piece of the device is exactly like any other piece. Once you know the state of one part, you know the state of all other parts. The device is of "one mind", not several. 

But we know, of course, that the pieces by themselves are not really classical, they are quantum. How come they look classical? 

Let's look at the entire system from a quantum information-theoretic point of view, not just the measurement device. The Venn diagram in question, of a quantum system $Q$ measured by a classical system $A$ that has two pieces $A_1$ and $A_2$ is
Fig. 2: Venn diagram of entropies for the full quantum measurement problem, including the quantum system $Q$ and the two "pieces" $A_1$ and $A_2$ of the measurement device $A$.
Now, that diagram looks a bit baffling, so let's spend some time with it. There are a bunch of minus signs in there for conditional entropies, but they should not be baffling you, because you should be getting used to them by now. Remember, $A$ is measuring $Q$. Let's take a look at what the entropy Venn diagram between $A$ and $Q$ looks like:
Fig. 3: Entropic Venn diagram for quantum system $Q$ and measurement device $A$
That's right, $Q$ and $A$ are perfectly entangled, because that is what the measurement operation does when you deal with quantum systems: it entangles. This diagram can be obtained in a straightforward manner from the joint diagram just above, simply by taking $A$ to be the joint system $A_1A_2$. Then, the conditional entropy of $A$ (given $Q$) is the sum of the three terms $-S$, $S$, and $-S$, the shared entropy is the sum of the three terms $S$, 0, and $S$, and so on. And, if you ignore $Q$ (meaning you don't look at it), then you get back the classically correlated diagram (0,$S$,0) for $A_1$ and $A_2$ you see in Fig. 1.

But how much does the measurement device $A$ know about the quantum system? 

From the entropy diagram above, the shared quantum entropy is $2S$, twice as much as the classical device can have! That doesn't seem to make any sense, and that is because the Venn diagram above displays quantum entanglement $2S$, which is not the same thing as classical information. Classical information is that which all the pieces of the measurement device agree upon. So let's find out how much of that shared entropy is actually shared with the quantum system. 

That, my friends, is given by the center of the triple Venn diagram above (Fig. 2). And that entry happens to be zero!

"Oh, well", you rumble, "that must be due to how you chose to construct this particular measurement!"

Actually, no. That big fat zero is generic: it will always be there in any tri-partite quantum entropy diagram where the joint system (quantum system and measurement device combined) is fully known. It is a LAW.
"What law is that?", you immediately question.

That law is nothing other than the law that prevents quantum cloning! The classical device cannot have any information about the quantum system, because as I said in a previous post, quantum measurement is actually impossible! What I just showed you is the mathematical proof of that statement.
"Hold on, hold on!"
What?
"Measurement reveals nothing about the quantum state? Ever?"

I can see you're alarmed. But rest assured: it depends. If quantum system $Q$ and classical measurement device $A$ (composed of a zillion subsystems $A_1,....A_{\rm zillion}$) are the only things there (meaning they form a pure state together), then yeah, you can't know anything. This law is really a quite reasonable one: sometimes it is called the law of "monogamy of entanglement". 

Otherwise, that is, if a bunch of other systems exist so that, if you trace over them the state and the measurement device are mixed, then you can learn something from your measurement. 

"Back up there for a sec. Monogamy what?"

Monogamy of entanglement? That's just a law that says that if one system is fully entangled with another (meaning their entropy diagram looks like Fig. 3), then they can't share that entanglement in the same intimate manner with a third party. Hence the big fat zero in the center of the Venn diagram in Fig. 2. If you think just a little about it, it will occur to you that if you could clone quantum states, then entanglement would not have to be monogamous at all.

"A second is all it takes?"

I said a little. It could be a second. It could be a tad longer. Stop reading and start thinking. It's not a very hard quiz.

"Oh, right, because if you're entangled with one system, and then if you could make a clone..."

Go on, you're on the right track!

"...then I could take a partner of a fully entangled pair, clone it, and then transform its partner into a new state (because if you clone the state you got to take its partner with it) so that the cloned state is entangled with the new state. Voilà, no more monogamy."

OK you, get back to your seat. You get an A.

So we learn that if a measurement leaves the quantum system in a state as depicted in Fig. 2, then nothing at all can be learned about the quantum state. But this is hardly ever the case. In general, there are many other systems that the quantum system $Q$ is entangled with. And if we do not observe them, then the joint system $QA$ is not pure, meaning that the joint entropy does not vanish. And in that case, the center of the triple quantum Venn diagram does not vanish either. And because that center tells you how much information you can obtain about the quantum system in a classical measurement, that means that you can actually learn something about the quantum system after all. 

But keep in mind: if the joint system $QA$ does not have exactly zero entropy, it is just a little bit classical, for reasons I discussed earlier in this series. Indeed, the whole concept of "environment-induced superselection" (or einselection) advocated by the quantum theorist Wojciech Zurek hinges precisely on such additional systems that interact with $QA$ (the "environment"), and that are being ignored ("traced over"). They "classicize" the quantum system, and allow you to extract some information about it. 

I do realize that "classicize" is not a word. Or at least, not a word usually used in this context.  Not until I gave it a new meaning, right?

With this, dear patient reader, I must end this particular post, as I have exceeded the length limit of single blog posts (a limit that I clearly have not bothered with in any of my previous posts). I know you are waiting for Schrödinger's cat. I'll give you that, along with your run-of-the-mill beam splitter, as well as the not-so-run-of-the-mill quantum eraser, in Part 6.

"Quantum eraser? Is that some sort of super hero, or are you telling me that you can reverse a quantum measurement?" 

Oh, the quantum eraser is real. The super hero, however, is Marlan Scully, and you'll learn all about it in Part 6.




Tuesday, October 7, 2014

Nifty papers I wrote that nobody knows about (Part 4: Complex Langevin equation)

This is the last installment of the "Nifty Papers" series. Here are the links to Part 1, Part 2, and Part 3.

For those outside the computational physics community, the following words don't mean anything: The Sign Problem.

For those who have encountered the problem, however, these words elicit terror. They stand for sleepless nights. They spell despair. They make grown men and women weep helplessly. The Sign Problem. 

OK, I get it, you're not one of those. So let me help you out.

In computational physics, one of the main tools people use to calculate complicated quantities is the Monte Carlo method. The method relies on random sampling of distributions in order to obtain accurate estimates of means. In the lab where I was a postdoc from 1992-1995, the Monte Carlo method was used predominantly to calculate the properties of nuclei, using a shell model approach. 

I can't get into the specifics of the Monte Carlo method in this post, not least because such an exposition would lose me a good fraction of the viewers/readers I have left at this point. Basically, it is a numerical method to calculate integrals (even though it can be used for other things too). It involves sampling the integrand and summing the terms. If the integrand is strongly oscillating (lots of high positives and high negatives), then the integral may be slow to converge. Such integrals appear in particular when calculating expectation values in strongly interacting systems, such as for example big nuclei. And yes, the group I had joined as a postdoc at that point in my career specialized in calculating properties of large nuclei computationally using the nuclear shell model. These folks would battle the sign problem on a daily basis.  

And while as a Fairchild Prize Fellow (at the time it was called the "Division Prize Fellowship", because Fairchild did not at that time want their name attached) I could work on anything I wanted (and I did!), I also wanted to do something that would make the life of these folks a little easier. I decided to try to tackle the sign problem. I started work on this problem in the beginning of 1993 (the first page of my notes reproduced below is dated February 9th, 1993, shortly after I arrived).

The last calculation, pages 720-727 of my notes, is dated August 27th, 1999, so I clearly took my good time with this project! Actually, it lay dormant for about four years as I worked on digital life and quantum information theory. But my notes were so detailed that I could pick the project back up in 1999.


The idea to use the complex Langevin equation to calculate "difficult" integrals is not mine, and not new (the literature on this topic goes back to 1985, see the review by Gausterer [1]). I actually had the idea without knowing these papers, but this is neither here nor there. I was the first to apply the method to the many-fermion problem, where I also was able to show that the complex Langevin (CL) averages converge reliably. Indeed, the CL method was, when I began working on it, largely abandoned because people did not trust those averages. But enough of the preliminaries. Let's jump into the mathematics.

Take a look at the following integral:

$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}\cos(\sigma z).$$
This integral looks very much like the Gaussian integral
$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}=1,$$
except for that cosine function. The exact result for the integral with the cosine function is (trust me there, but of course you can work it out yourself if you feel like it) 
$$e^{-(1/2)z^2}.$$
This result might surprise you, as the integrand itself (on account of the cos function) oscillates a lot:
The integrand $\cos(10x)e^{-1/2 x^2}$
The factor $e^{-(1/2) \sigma^2}$ dampens these oscillations, and in the end the result is simple: it looks just like the Gaussian weight itself, with $\sigma$ replaced by $z$, as if the cosine function wasn't even there. But a Monte Carlo evaluation of this integral runs into the sign problem when $z$ gets large and the oscillations become more and more violent. The numerical average converges very, very slowly, which means that your computer has to run for a very long time to get a good estimate.

Now imagine calculating an expectation value where this problem occurs both in the numerator and the denominator. In that case, we have to deal with small but slowly converging averages in both places, and the ratio converges even more slowly. For example, imagine calculating the "mean square"
$$\langle\sigma^2\rangle_N=\frac{\int_{-\infty}^\infty d\sigma\,\sigma^2 e^{-(1/2)\sigma^2}\cos^N(\sigma z)}{\int_{-\infty}^\infty d\sigma\, e^{-(1/2)\sigma^2}\cos^N(\sigma z)}.$$
The denominator of this ratio (for $N=1$) is the integral we looked at above. The numerator just has an extra $\sigma^2$ in it. The "particle number" $N$ is there just to make things worse if you choose a large one, just as in nuclear physics larger nuclei are harder to calculate. I show you below the result of calculating this expectation value using the Monte Carlo approach (data with error bars), along with the analytical exact result (solid line), and as inset the average "sign" $\Phi$ of the calculation. The sign here is just the expectation value of
$$\Phi(z)=\frac{\cos(\sigma z)}{|\cos(\sigma z)|}$$

You see that for increasing $z$, the Monte Carlo average becomes very noisy, and the average sign disappears. For a $z$ larger than three, this calculation is quite hopeless: sign 1, Monte Carlo 0.
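
If you want to see this collapse with your own eyes, here is a minimal Monte Carlo sketch (mine, not from the paper) for the $N=1$ case, where the exact answers are easy: the denominator is $e^{-z^2/2}$ (as stated above), and the ratio works out to $1-z^2$. The samples are drawn from the Gaussian and reweighted by the oscillating factor $\cos(\sigma z)$:

```python
# Minimal sketch of the sign problem for <sigma^2>_N with N = 1: sample sigma
# from the Gaussian, reweight by cos(sigma*z), and watch the average sign and
# the signal collapse as z grows. Sample size is illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.normal(size=200_000)                   # sigma ~ exp(-sigma^2/2)

for z in (0.5, 1.5, 2.5, 3.5):
    w = np.cos(sigma * z)                          # oscillating reweighting factor
    estimate = np.mean(sigma**2 * w) / np.mean(w)  # Monte Carlo ratio
    avg_sign = np.mean(np.sign(w))                 # the "sign" Phi, on average
    print(z, estimate, 1 - z**2, avg_sign)         # estimate vs. exact (N = 1)
```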

I want to make one thing clear here: of course you would not use the Monte Carlo method to calculate this integral if you can do it "by hand" (as you can for the example I show here). I'm using this integral as a test case, because the exact result is easy to get. The gist is: if you can solve this integral computationally, maybe you can solve those integrals for which you don't know the answer analytically in the same manner. And then you solve the sign problem. So what other methods are there?

The solution I proposed was using the complex Langevin equation. Before moving to the complex version (and explaining why), let's look at using the real Langevin equation to calculate averages. The idea here is the following. When you calculate an integral using the Monte Carlo approach, what you are really doing is summing over a set of points that are chosen such that you accept (probabilistically) points where the weight of the integrand is large and reject those where it is small, which creates a sequence of random samples that approximates the probability distribution that you want to integrate over. 

But there are other methods to create sequences that appear to be drawn from a given probability distribution. One is the Langevin equation which I'm going to explain. Another is the Fokker-Planck equation, which is related to the Langevin equation but that I'm not going to explain. 

Here's the theory (not due to me, of course) on how you use the Langevin equation to calculate averages. Say you want to calculate the expectation value of a function $O(\sigma)$. To do that, you need to average $O(\sigma)$, which means you sum (and by that I mean integrate) this function over the probability of finding $\sigma$. The idea here is that $\sigma$ is controlled by a physical process: $\sigma$ does not change randomly, but according to some laws of physics. You want to know the average of $O$, which depends on $\sigma$, given that $\sigma$ changes according to some natural process.

If you think about it long enough, you realize that many many things in physics boil down to calculating averages just like that. Say, the pressure at room temperature given that the molecules are moving according to the known laws of physics. Right, almost everything in physics, then. So you see, being able to do this is important. Most of the time, Monte Carlo will serve you just fine. We are dealing with all the other cases here. 

First, we need to make sure we capture the fact that the variable $\sigma$ changes according to some physical law. When you are first exposed to classical mechanics, you learn that the time development of any variable is described by a Lagrangian function (and then you move on to the Hamiltonian so that you are prepared to deal with quantum mechanics, but we won't go there here). The integral of the Lagrangian is called the "action" $S$, and that is the function that is used to quantify how likely any path of the variable $\sigma$ is, given that it follows these laws. For example, if you are a particle following the laws of gravity, then I can write down for you the Lagrangian (and hence the action) that makes sure the particle follows the law. It is $L=-\frac12m v^2+mV(\sigma)$, where $m$ is the mass, $v$ is the velocity of the $\sigma$ variable, $v=d\sigma/dt$, and $V(\sigma)$ is the gravitational potential.

The action is $S=\int L(\sigma(t))\,dt$, and the equilibrium distribution of $\sigma$ is
$$P(\sigma)=\frac1Z e^{-S}$$ where $Z$ is the partition function $Z=\int e^{-S}d\sigma$.

In computational physics, what you want is a process that creates this equilibrium distribution, because if you have it, then you can just sum over the variables so created and you have your integral. Monte Carlo is one method to create that distribution. We are looking for another. 

It turns out that the Langevin equation
$$\frac{d\sigma}{dt}=-\frac12 \frac{dS}{d\sigma}+\eta(t)$$
creates precisely such a process. Here, $S$ is the action for the process, and $\eta(t)$ is a noise term with zero mean and unit variance:
$$\langle \eta(t)\eta(t^{\prime})\rangle=\delta(t-t^\prime).$$
Note that $t$ here is a "fictitious" time: we use it only to create a set of $\sigma$s that are distributed according to the probability distribution $P(\sigma)$ above. If we have this fictitious time series $\sigma_0(t)$ (the solution to the differential equation above), then we can just average the observable $O(\sigma)$ along it:
$$\langle O\rangle=\lim_{T\to\infty}\frac1T\int_0^T O(\sigma_0(t))\,dt$$
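
Here is a minimal Euler-Maruyama sketch (mine, not from the paper) of this recipe, for the harmless Gaussian action $S=\frac12\sigma^2$, where there is no sign problem and the fictitious-time average should come out close to the exact value $\langle\sigma^2\rangle=1$:

```python
# Minimal Euler-Maruyama sketch of the real Langevin sampler for S = sigma^2/2.
# The fictitious-time average of sigma^2 should come out close to 1.
# Step size and run length are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, steps, burn_in = 0.01, 400_000, 10_000

def dS(sigma):                    # dS/dsigma for the action S = sigma^2 / 2
    return sigma

sigma, acc = 0.0, []
for i in range(steps):
    # d sigma/dt = -1/2 dS/dsigma + eta, with <eta(t) eta(t')> = delta(t - t')
    sigma += -0.5 * dS(sigma) * dt + np.sqrt(dt) * rng.normal()
    if i > burn_in:
        acc.append(sigma**2)
print(np.mean(acc))               # should be close to <sigma^2> = 1
```
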
Let's try the "Langevin approach" to calculating averages on the example integral $\langle \sigma^2\rangle_N$ above. The action we have to use is
$$S=\frac12 \sigma^2-N\ln [\cos(\sigma z)]$$ so that $e^{-S}$ gives exactly the integrand we are looking for. Remember, all expectation values are calculated as
$$\langle O\rangle=\frac{\int O(\sigma) e^{-S(\sigma)}d\sigma}{\int e^{-S(\sigma)}d\sigma}.$$

With that action, the Langevin equation is
$$\dot \sigma=-\frac12(\sigma+Nz\tan(\sigma z))+\eta \ \ \ \      (1)$$
This update rule creates a sequence of $\sigma$ that can be used to calculate the integral in question.

And the result is ..... a catastrophe! 

The average does not converge, mainly because in the differential equation (1), I ignored a drift term that goes like $\pm i\delta(\cos(z\sigma))$. That it's there is not entirely trivial, but if you sit with that equation a little while you'll realize that weird stuff will happen if the cosine is zero. That term throws the trajectory all over the place once in a while, giving rise to an average that simply will not converge.

In the end, this is the sign problem raising its ugly head again. You do one thing, you do another, and it comes back to haunt you. Is there no escape?

You've been reading patiently so far, so you must have suspected that there is an escape. There is indeed, and I'll show it to you now.

This simple integral that we are trying to calculate
$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}\cos(\sigma z),$$
we could really write it also as
$$\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty d\sigma e^{-(1/2)\sigma^2}e^{i\sigma z},$$
because the latter integral really has no imaginary part: the imaginary part of the integrand is anti-symmetric in $\sigma$, and so it integrates to zero. 

This is the part that you have to understand to appreciate this article. And as a consequence this blog post.  If you did, skip the next part. It is only there for those people that are still scratching their head.

OK, here's what you learn in school: $e^{i\sigma z}=\cos(\sigma z)+i\sin(\sigma z)$. This formula is so famous, it even has its own name: Euler's formula. And $\cos(\sigma z)$ is a symmetric function of $\sigma$ (it remains the same if you change $\sigma\to-\sigma$), while $\sin(\sigma z)$ is anti-symmetric ($\sin(-\sigma z)=-\sin(\sigma z)$). An integral from $-\infty$ to $\infty$ renders any anti-symmetric piece of the integrand zero: only the symmetric part survives. Therefore, under the Gaussian weight, $\int_{-\infty}^\infty e^{i\sigma z}$ and $\int_{-\infty}^\infty \cos(\sigma z)$ give the same answer. 
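
Written out in full (this is just Euler's formula under the integral):
$$\int_{-\infty}^\infty d\sigma\, e^{-\frac12\sigma^2}e^{i\sigma z}=\int_{-\infty}^\infty d\sigma\, e^{-\frac12\sigma^2}\cos(\sigma z)+i\underbrace{\int_{-\infty}^\infty d\sigma\, e^{-\frac12\sigma^2}\sin(\sigma z)}_{=0\ \text{(anti-symmetric in }\sigma)}.$$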

This is the one flash of brilliance in the entire paper: that you can replace a cos by a complex exponential if you are dealing with symmetric integrals. Because this changes everything for the Langevin equation (it doesn't do that much for the Monte Carlo approach). The rest was showing that this worked also for more complicated shell models of nuclei, rather than the trivial integral I showed you. Well, you also have to figure out how to replace oscillating functions that are not just a cosine, (that is, how to extend arbitrary negative actions into the complex plane) but in the end, it turns out that this can be done if necessary.

But let's first see how this changes the Langevin equation. 

Let's first look at the case $N=1$. The action for the Langevin equation was 
$$S=\frac12\sigma^2-\log\cos(\sigma z)$$
If you replace the cos, the action instead becomes
$$S=\frac12\sigma^2\pm i\sigma z.$$ The fixed point of the differential equation (1), which was on the real line and therefore could hit the singularity $\delta(\cos(z\sigma))$, has now moved into the complex plane. 

And in the complex plane there are no singularities! Because they are all on the real line! As a consequence, the averages based on the complex action should converge! The sign problem can be vanquished just by moving to the complex Langevin equation!
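
Here is a minimal sketch (mine, with illustrative step size and run length) of the complex Langevin update for the $N=1$ toy integral, using the complexified action $S=\frac12\sigma^2+i\sigma z$: the variable $\sigma$ becomes complex, the noise stays real, and the fictitious-time average of $\sigma^2$ should land near the exact answer $1-z^2$, even at values of $z$ where the Monte Carlo sign has long collapsed:

```python
# Minimal complex Langevin sketch for the N = 1 toy integral, with the
# complexified action S = sigma^2/2 + i*sigma*z. The drift pulls sigma off
# the real line (towards sigma = -i z), where there are no singularities.
import numpy as np

rng = np.random.default_rng(2)
dt, steps, burn_in = 0.01, 400_000, 10_000

def dS(sigma, z):                 # dS/dsigma for S = sigma^2/2 + i*sigma*z
    return sigma + 1j * z

for z in (1.0, 2.0, 3.0):
    sigma, acc = 0.0 + 0.0j, []
    for i in range(steps):
        sigma += -0.5 * dS(sigma, z) * dt + np.sqrt(dt) * rng.normal()
        if i > burn_in:
            acc.append(sigma**2)
    print(z, np.mean(acc).real, 1 - z**2)   # Langevin average vs. exact result
```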

And that explains the title of the paper. Sort of. In the figure below, I show you how the complex Langevin equation fares in calculating that integral that, scrolling up all the way, gave rise to such bad error bars when using the Monte Carlo approach. And the triangles in that plot show the result of using a real Langevin equation. That's the catastrophe I mentioned: not only is the result wrong. It doesn't even have large error bars, so it is wrong with conviction! 

The squares (and the solid line) come from using the extended (complex) action in the Langevin equation. It reproduces the exact result precisely.


Average calculated with the real Langevin equation (triangles) and the complex Langevin equation (squares), as a function of the variable $z$. The inset shows the "sign" of the integral, which still vanishes at large $z$ even as the complex Langevin equation remains accurate.
The rest of the paper is somewhat anti-climactic. First I show that the same trick works in a quantum-mechanical toy model of rotating nuclei (as opposed to the trivial example integral). I offer the plot below from the paper as proof:
Solid line is exact theory, symbols are my numerical estimates. You've got to hand it to me: Complex Langevin rules.

But if you want to convince the nuclear physicists, you have to do a little bit more than solve a quantum mechanical toy model. Short of solving the entire beast of the nuclear shell model, I decided to tackle something in between: the Lipkin model (sometimes called the Lipkin-Meshkov-Glick model), which is a schematic nuclear shell model that is able to describe collective effects in nuclei. And the advantage is that exact analytic solutions to the model exist, which I can use to compare my numerical estimates to.

The math for this model is far more complicated and I'll spare you the exposition for the sake of sanity here. (Mine, not yours.) A lot of path integrals to calculate. The only thing I want to say here is that in this more realistic model, the complex plane is not entirely free of singularities: there are in fact infinitely many of them. But they naturally lie in the complex plane, so a random trajectory will avoid them almost all of the time, whereas you are guaranteed to run into them if they are on the real line and the dynamics return you to the real line without fail. That is, in a nutshell, the discovery of this paper. 

So, this is obviously not a well-known contribution. This is a bit of a bummer, because the sign problem still very much exists, in particular in lattice gauge theory calculations of matter at finite chemical potential (meaning, at finite density). Indeed, a paper came out just recently (see the arXiv link in case you ended up behind a paywall) where the authors try to circumvent the sign problem in lattice QCD at finite density by doing the calculations explicitly at high temperature, using the old trick of doubling your degrees of freedom. Incidentally, this is the same trick that gives you black holes at the Hawking temperature, because the event horizon naturally doubles the degrees of freedom. I used this trick a lot when calculating Feynman diagrams in QCD at finite temperature. But that's a fairly well-known paper, so I can't discuss it here. 

Well, maybe some brave soul one day rediscovers this work, and  writes a "big code" that solves the problem once and for all using this trick. I think the biggest reason why this paper never got any attention is that I don't write big code. I couldn't apply this to a real-world problem, because to do that you need mad software engineering skills. And I don't have those, as anybody who knows me will be happy to tell you. 

So there this work lingers. Undiscovered. Lonely. Unappreciated. Like sooo many other papers by sooo many other researchers over time. If only there was a way that old papers like that could get a second chance! If only :-)

[1] H. Gausterer, Complex Langevin: A numerical Method? Nuclear Physics A 642 (1998) 239c-250c.
[2] C. Adami and S.E. Koonin, Complex Langevin equation and the many-fermion problem. Physical Review C 63 (2001) 034319. 













Friday, October 3, 2014

Nifty papers I wrote that nobody knows about: (Part 3: Non-equilibrium Quantum Statistical Mechanics)

This is the third part of the "Nifty Papers" series. Link to Part 1. Link to Part 2.

In 1999, I was in the middle of writing about quantum information theory with my colleague Nicolas Cerf. We had discovered that quantum conditional entropy can be negative, discussed this finding with respect to the problem of quantum measurement, separability, Bell inequalities, as well as the capacity of quantum channels. Heady stuff, you might think. But we were still haunted by Hans Bethe's statement to us that the discovery of negative conditional entropies would change the way we perceive quantum statistical physics. We had an opportunity to write an invited article for a special issue on Quantum Computation in the journal "Chaos, Solitons, and Fractals", and so we decided to take a shot at the "Quantum Statistical Mechanics" angle.

Because I'm writing this blog post in the series of "Articles I wrote that nobody knows about", you already know that this didn't work out as planned. 

Maybe this was in part because of the title. Here it is, in all its ingloriousness: "Prolegomena to a non-equilibrium quantum statistical mechanics."
C. Adami & N.J. Cerf, Chaos Sol. Fract. 10 (1999) 1637-1650
There are many things that, in my view, conspired to this paper being summarily ignored. The paper has two citations for what it's worth, and one is a self-citation! 

There is, of course, the reputation of the journal to blame. While this was a special issue that put together papers that were presented at a conference (and which were altogether quite good), the journal itself was terrible as it was being ruled autocratically by its editor Mohammed El Naschie, who penned and published a sizable fraction of the papers appearing in the journal (several hundred, in fact). A cursory look at any of these papers shows him to be an incurable (but certainly self-assured) crackpot, and he was ultimately fired from his position by the publisher, Elsevier. He's probably going to try to sue me just for writing this, but I'm trusting MSU has adequate legal protection for my views. 

There is, also, the fairly poor choice of a title. "Prolegomena?" Since nobody ever heard of this article, I never found anyone who would, after a round of beers, poke me in the sides and exclaim "Oh you prankster, choosing this title in homage to the one by Tom Banks!" Because there is indeed a paper by Tom Banks (a string theorist) entitled "Prolegomena to a theory of bifurcating universes: A nonlocal solution to the cosmological constant problem or little lambda goes back to the future". Seriously, it's a real paper, take a look:


For a reason that I can't easily reconstruct, at the time I thought this was a really cool paper. In hindsight it probably wasn't, but it certainly has been cited a LOT more often than my own Prolegomena. That word, by the way, is a very innocent Greek word meaning "an introduction at the start of a book". So I meant to say: "This is not a theory, it is the introduction to something that I would hope could one day become a theory". 

There is also the undeniable fact that I violated the consistency of singular/plural usage, as "a" is singular, and "Statistical Mechanics" is technically plural, even though it is never used in the singular.

Maybe this constitutes three strikes already. Need I go on?

The paper begins with a discussion of the second law of thermodynamics, and my smattering of faithful readers has read my opinions about this topic before. My thoughts on the matter were born around that time, and this is indeed the first time that these arguments were put in print. It even has the "perfume bottle" picture that also appears in the aforementioned blog post.

Now, the arguments outlined in this paper concerning the second law are entirely classical (not quantum), but I used them to introduce the quantum information-theoretic considerations that followed, because the main point was that for the second law, it is a conditional entropy that increases. And it is precisely the conditional entropy that is peculiar in quantum mechanics, because it can be negative. So in the paper I'm writing about here, I first review that fact, and then show that the negativity of conditional quantum entropy has interesting consequences for measurements on Bell states. The two figures showing the Venn diagrams for same-spin as opposed to orthogonal-spin measurements are reproduced here:

What these quantum Venn diagrams show is that the choice of measurement made on a fully entangled quantum state $Q_1Q_2$ determines the relative state of the measurement devices: perfect correlation in the case of same-direction spin measurements, zero correlation in the case of orthogonal measurements. But the quantum reality is that the measurement devices are even more strongly entangled with the quantum system in the case of the orthogonal measurement, even though they are not correlated at all with each other. Which goes to show you that quantum and physical reality can be two entirely different things.
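To see where negative numbers in such diagrams come from in the first place, here is a minimal worked example (in my notation here, not necessarily the paper's): take the Bell state $|\Psi\rangle=\frac1{\sqrt2}(|00\rangle+|11\rangle)$ of the pair $Q_1Q_2$. The joint state is pure, so $S(Q_1Q_2)=0$, while each subsystem alone is maximally mixed, $S(Q_1)=S(Q_2)=1$ bit. The conditional entropy is therefore
$$S(Q_1|Q_2)=S(Q_1Q_2)-S(Q_2)=0-1=-1\ ,$$
a value that is impossible for classical (Shannon) entropies.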

I assure you these results are profound, and because this paper is essentially unknown, you might even try to make a name for yourself! By, umm, citing this one? (I didn't encourage you to plagiarize, obviously!)

So what comes after that? After that come the Prolegomena of using quantum information theory to solve the black hole information paradox!

This is indeed the first time that any of my thoughts on black hole quantum thermodynamics appear in print. And if you compare what's in this paper with the later papers that appeared first in preprint form in 2004, and finally in print in 2014, the formalism in this manuscript seems fairly distinct from these calculations.

But if you look closer, you will see that the basic idea was already present there.

The way I approach the problem is clearly rooted in quantum information theory. For example, people often start by saying "Suppose a black hole forms from a pure state". But what this really means is that the joint state between the matter and radiation forming the black hole, as well as the radiation that is being produced at the same time (which does not ultimately become the black hole) is in a pure state. So you have to describe the pure state in terms of a quantum Venn diagram, and it would look like this:
Entropy Venn diagram between the forming black hole ("proto-black hole" PBH) and a radiation field R. The black hole will ultimately have entropy $\Sigma$, the entanglement entropy.
Including this radiation field R entangled with the forming black hole is precisely the idea of stimulated emission of radiation that ultimately would solve all the black hole information paradoxes: it was clear to me that you could not form a black hole without leaving an entangled signature behind. I didn't know at the time that R was stimulated emission, but I knew something had to be there. 
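In entropy language, the content of this diagram is just that of a bipartite pure state (my paraphrase, not a quote from the paper): because the joint state of PBH and R is pure, $S({\rm PBH},R)=0$ while $S({\rm PBH})=S(R)=\Sigma$, so that
$$S({\rm PBH}|R)=S({\rm PBH},R)-S(R)=-\Sigma\ ,\qquad I({\rm PBH}{:}R)=S({\rm PBH})+S(R)-S({\rm PBH},R)=2\Sigma\ .$$
The proto-black hole's entropy $\Sigma$ is entirely entanglement entropy with the radiation it leaves behind as it forms.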

Once the black hole is formed, it evaporates by the process of Hawking radiation. During evaporation, the black hole becomes entangled with the radiation field R' via the system R:
Entropy Venn diagram between radiation-of-formation R, the black hole BH, and the Hawking radiation R'. Note that the entropy of the black hole $S_{\rm BH}$ is smaller than the entropy-of-formation $\Sigma$ by $\Delta S$, the entropy of the Hawking radiation. 
The quantum entropy diagram of three systems is characterized by three (and exactly three) variables, and the above diagram was our best bet at this diagram. Note how the entire system has zero entropy and is highly entangled, but when tracing out the radiation-of-formation, the black hole is completely uncorrelated with the Hawking radiation as it should be. 

Now keep in mind, this diagram was drawn up without any calculation whatsoever. And as such, it is prone to be dismissed as a speculation, and it was without doubt a speculation at the time. Five years later I had a calculation, but its acceptance would have to wait for a while.

In hindsight, I'm still proud of this paper. In part because I was bold enough to pronounce the death of the second law as we know it in print, and in part because it documents my first feeble attempts to make sense of the black hole information morass. This was before I had made any calculations in curved space quantum field theory, and my ruminations can therefore easily be dismissed as naive. They were naive (for sure), but not necessarily stupid.

Next week, be prepared for the last installment of the "Nifty Papers" series. The one where I single-handedly take on the bugaboo of computational physics: the "Sign Problem". That paper has my postdoctoral advisor Steve Koonin as a co-author, and he did provide encouragement and helped edit the manuscript. But by and large, this was my first single-author publication in theoretical/computational physics. And the crickets are still chirping....






Sunday, September 28, 2014

Nifty papers I wrote that nobody knows about: (Part 2: Quark-Gluon Plasma)

This is Part 2 of the "Nifty papers" series, talking about papers of mine that I think are cool, but that have been essentially ignored by the community. Part 1 is here.

This is the story of my third paper, still as a graduate student (in my third year) at Stony Brook University, on Long Island.

Here's the title:

Physics Letters B 217 (1989) 5-8
First things first: What on earth is "Charmonium"? To answer this question, I'll give you in what follows a concise introduction to the theory of quarks and gluons, known as "Quantum Chromodynamics".

Just kidding, of course. The Wiki article I linked above should get you far enough for the purposes of this article right here. But if this is TL;DR for you, here's the synopsis:

There are exactly six quarks in this universe: up (u), down (d), strange (s), charm (c), bottom (b, also sometimes "beauty"), and top (t).

These are real, folks. Just because they have weird names doesn't mean you don't carry them in every fibre of your body. In fact you carry only two types of quarks with you, really: the u and d, because they make up the protons and neutrons that make all of you: proton=uud, neutron=udd: three quarks for every nucleon. 

The s, c, and b quarks exist only to annoy you, and to provide work for high-energy experimental physicists!

Just kidding again, of course. The fact that they (s,c, and b) exist provides us with a tantalizing clue about the structure of matter. As fascinating as this is, you and I have to move on right now. 

For every particle, there's an anti-particle. So there have to be anti-u and anti-d. They make up almost all of anti-matter. You did know that anti-matter is a real thing, not just an invention of Sci-Fi movies, right?

The particles that make up all of our known matter (and energy). The stuff that makes you (and almost all of our known universe) is in the first and 4th column. I'm still on the fence about the Higgs. It doesn't look quite right in this graph, does it?  Kind of like it's a bit of a mistake? Or maybe because it really is a condensate of top quarks? Source: Wikimedia
Right. You did. Good thing that. So we can move on then. So if we have u and d, we also must have anti-u and anti-d. And I'm sure you already did the math on charge to figure out that the charge of u better be +2/3, and the charge of d is necessarily -1/3. Because anti-matter has anti-charge, duh. If you're unsure why, contemplate the CPT theorem. 

Yes, quarks have fractional charges. If this blows your mind, you're welcome. And this is how we make one positive charge for the proton (uud), and a neutral particle (the neutron) from udd.
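In case you want to check the bookkeeping yourself, the arithmetic is just
$$Q_{\rm proton}=\tfrac23+\tfrac23-\tfrac13=+1\ ,\qquad Q_{\rm neutron}=\tfrac23-\tfrac13-\tfrac13=0\ .$$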

But the tinkerer in you has already set the brain gears in motion: what prevents me from making a (u-anti-u), (d-anti-d), (u-anti-d), (d-anti-u), etc.?

The answer is: nothing. They exist. (Next time, come up with this discovery when somebody else has NOT YET claimed a Nobel for it, OK?) These things are called mesons. They are important. I wrote my very first paper on the quantization of an effective theory that would predict how nucleons (you remember: protons and neutrons, meaning "we-stuff") interact with pions (a type of meson made only of u,d, anti-u, and anti-d), as discussed here.

What about the other stuff, the "invisible universe" made from all the other quarks, like strange, charm, bottom, and top? Well, they also form all kinds of baryons (the word that describes all the heavy stuff, such as protons and neutrons) and mesons (the semi-heavy stuff). But they tend to decay real fast.

But one such meson, very important both in the history of QCD and in our ability to understand it, is the meson called "charmonium".

I did tell you that it would take me a little time to get you up to date, right? Right. So, charmonium is a bound state of the charm and anti-charm quark.

(By the way, if there is anybody reading this who still thinks "Are you effing kidding me with all these quark names and stuff, are they even real?", please reconsider these thoughts, because they are like doubting we landed on the moon. We did, and there really are six quarks, and six anti-quarks. We are discussing their ramifications here. Thank you. Yes, "ramification" is a real word, that's why I linked to a dictionary. Yes, those Wiki pages on science are not making things up. Now, let's move on, shall we?)

The reason why we call the ${\rm c}\bar {\rm c}$ meson "charmonium" is because we have a name for the bound state of the electron and positron (also known as the anti-electron): we call it Positronium. Yes, that's a real thing. Not related to the Atomium, by the way. That's a real building, but not a real element.

So why is charmonium important? To understand that, we have to go back to the beginning of the universe.

No, we don't have to do it by time travel. Learning about charmonium might allow you to understand something about what was going on when our universe was really young. Like, less than a second young. Why would we care about these early times? Because they might reveal to us clues about the most fundamental laws of nature. Because the state of matter in the first few milliseconds (even the first few microseconds) might have left clues for us to decipher today.

At that time (before a millisecond), our universe was very different from how we see it today. No stars, no solar systems. We didn't even have elements. We didn't have nuclei! What we had was a jumble of quarks and gluons, which one charitable soul dubbed the "quark gluon plasma" (aka: QGP). The thing about a plasma is that positive and negative charges mill about unhindered, because they have way too much energy to form these laid-back bound states that we might (a few milliseconds later) find everywhere. 

So, here on Earth, people have been trying to recreate this monster of a time when the QGP reigned supreme, by shooting big nuclei onto other big nuclei. The idea here is that, for a brief moment in time, a quark gluon plasma would be formed that would allow us to study the properties of this very very early universe first hand. Make a big bang at home, so to speak. Study hot and dense matter.

While contemplating such a possibility at the RHIC collider in Brookhaven, NY (not far from where I was penning the paper I'm about to talk to you about), a pair of physicists (Tetsuo Matsui and Helmut Satz [1]) speculated that charmonium (you know, the $\bar c c$ bound state) might be seriously affected by the hot plasma. In the sense that you could not see the charmonium anymore.

Now, for ordinary matter, the $J/\psi$ (as the lowest energy state of the charmonium system is called, for reasons I can't get into) has well known properties. It has a mass of 3.1 GeV (I still know this by heart), and a short but measurable lifetime. Matsui and Satz speculated in 1986 that this $J/\psi$ would look very different if it was born in the midst of a quark gluon plasma, and that this would allow us to figure out whether such a state of matter was formed: all you have to do is measure the $J/\psi$'s properties. If it is much reduced in appearance (or even absent), then we've created a quark gluon plasma in the lab.

It was a straightforward prediction that many people accepted. The reason why the $J/\psi$ would disappear in a QGP, according to Matsui and Satz, was the phenomenon of "color screening". Basically, the deconfined light quarks and gluons of the hot plasma would screen the color attraction between the $c$ and the $\bar c$, preventing the formation of the meson. It is as if a conversation shouted over long distances is disrupted by a bunch of people standing in between, whispering to each other.
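The standard way to phrase that argument is in terms of a Debye-screened potential (this is my schematic rendering of the idea, not a formula taken from either paper):
$$V(r)\simeq-\frac43\frac{\alpha_s}{r}\,e^{-r/\lambda_D(T)}\ ,$$
where the screening length $\lambda_D(T)$ shrinks as the temperature $T$ of the plasma rises. Once $\lambda_D$ drops below the size of the would-be $J/\psi$, the attraction is too short-ranged to hold the pair together, and the bound state melts.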

For a reason I cannot remember, Ismail Zahed and I came to doubt this scenario. We were wondering whether it was really the "hotness" of the plasma (creating all these screening pairs) that destroyed the $J/\psi$. Could it instead be destroyed even if a hot plasma was not formed?

Heavy-ion collision in the rest frame of the target (above), and in the center-of-mass frame (below)

The image we had in our heads was the following. When a relativistically accelerated nucleus hits another nucleus, then in the reference frame of the center of mass of both nuclei, each nucleus is moving at crazy speed (from this center of mass, you see two nuclei coming right at you). And when two nuclei move relativistically, their longitudinal dimension (in the direction of motion) contracts, while the transverse directions remain unchanged. This means that the shapes of the nuclei are affected: instead of spheres, the nuclei appear squeezed into pancakes, as the image above suggests.
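Quantitatively, a nucleus of rest-frame radius $R$ moving with Lorentz factor $\gamma$ is contracted to thickness
$$L_\parallel=\frac{2R}{\gamma}\ ,\qquad \gamma=\frac{E}{mc^2}\ ,$$
along its direction of motion. At collider energies of order 100 GeV per nucleon (a ballpark figure of my choosing, roughly the RHIC regime), $\gamma$ is of order 100, so the "spheres" really do look like pancakes.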

When looked at from this vantage point, a very different point of view concerning the disappearance of the $J/\psi$ can be had. Each of the nuclei creates around it a color-electric and a color-magnetic field, because of all the gluons exchanged between the flattened nuclei. Think of it in terms of electrodynamics as opposed to color dynamics: if the two nuclei were electrical conductors, they would set up an electric field between them. Indeed, a pair of conducting plates separated by a small distance is a capacitor. So, could it be that in such a collision, instead of all that hot screening, all that happens is the formation of a color-electric capacitor that simply rips the $J/\psi$ to pieces?

That's the question we decided to check, by doing an old-fashioned calculation. How do you do this? I reasoned more or less as follows: if I am going to calculate the fate of a bound state in a color-electric field, I ought to know how to calculate the fate of a bound state in an ordinary electric field. Like, for example: who calculated what happens to the hydrogen atom if it is placed between the plates of a capacitor? Today I would just google the question, but in 1988 you had to really search. And after searching (I spent a lot of time in the library in those days) I hit paydirt. A physicist by the name of Cornelius Lanczos had done exactly that calculation (the fate of a hydrogen atom in a strong electric field). What he showed is that in strong electric fields, the electron is ripped off of the proton, leading to the ionization of the hydrogen atom.

This was the calculation I was looking for! All I had to do was take the potential (namely, the standard Coulomb potential of electrodynamics) and replace it by the effective potential of quantum chromodynamics.

Now, both you and I know that if we don't have a potential, then we can't calculate the energy of the bound state. And the potential for color-electric flux tubes (as opposed to the exchange of photons, which gives rise to the electromagnetic forces we all know) ought to be notably different from the Coulomb potential.

No, I'm not usually one to be sidetracked into a celebration of the pioneers of quantum mechanics. But the career of Lanczos should give you pause. The guy was obviously brilliant (another one of the Hungarian diaspora), yet he is barely remembered now. Spend some time with his biography on Wikipedia: there are others besides Schrödinger, Heisenberg, Planck, and Einstein who advanced our understanding not just of physics but, in the case of Lanczos, of computational physics as well.

So I was sidetracked after all. Moving on. So, I take Lanczos's calculation, and just replace the Coulomb potential by the color-electric potential. Got it? 

Easier said than done! The Coulomb potential is, as everybody knows, $V(r)=-\frac{e^2}r$. The color-electric potential is (we decided to ignore color-magnetic forces for reasons that I don't fully remember, but that made perfect sense then)
$$V(r)=-\frac43\frac{\alpha_s}{r}+\sigma r.\ \ \ (1)$$

"All right", you say, "what's all this about?"

Well, I respond, you have to understand that when it comes to color-electric (rather than just electric) effects, the coupling constant is not the squared electron charge, but 4/3 times the strong coupling constant $\alpha_s$.
"But why 4/3?"

Ok, the 4/3 is tricky. You don't want to see this, trust me. It's not even in the paper. You do want to see it? OK. All others, skip the colored text.

How to obtain the quark-antiquark Coulomb potential
To calculate the interquark potential you have to take the Fourier transform of the Feynman diagram of quark-anti-quark scattering:

The solid lines are quarks or anti-quarks, and the dashed line is the gluon exchanged between them. Because the gluon propagator $D^{-1}_{ab}$ is diagonal in color, the color part of the amplitude is given by the expectation value of $\vec T_1\cdot\vec T_2$. Here $T^{(a)}=\frac12\lambda^a$ are the generators of the quark symmetry group SU(3), and $\lambda^a$ is a Gell-Mann matrix (there are eight of them). What the value of $\langle \vec T_1\cdot\vec T_2\rangle$ is depends on the representation the quark-antiquark pair is in. A little calculation shows that for a quark-antiquark pair in a singlet state, $\langle \vec T_1\cdot\vec T_2\rangle=-4/3$. If the pair is in an octet state, this same expectation value gives you 1/6, meaning that the octet is unbound.
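For the record, here is that little calculation, sketched in my own notation: write the color factor as
$$\vec T_1\cdot\vec T_2=\tfrac12\left[(\vec T_1+\vec T_2)^2-\vec T_1^{\,2}-\vec T_2^{\,2}\right]\ ,$$
where each quark (or antiquark) carries the fundamental Casimir $\vec T^{\,2}=C_F=4/3$. In the singlet the total color charge vanishes, $(\vec T_1+\vec T_2)^2=0$, so $\langle\vec T_1\cdot\vec T_2\rangle=\tfrac12(0-\tfrac43-\tfrac43)=-\tfrac43$ (attractive). In the octet, $(\vec T_1+\vec T_2)^2=C_A=3$, so $\langle\vec T_1\cdot\vec T_2\rangle=\tfrac12(3-\tfrac83)=+\tfrac16$ (repulsive, hence unbound).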

More interesting than the Coulomb term is the second term in the potential  (1), the one with the wavy thing in it. Which is called a "sigma", by the way.

"What of it?"

Well, $\sigma$ is what is known as the "string tension". As I mentioned earlier, quarks and anti-quarks can't just run away from each other, even if you give them plenty of energy: in strong interactions, the potential energy between a quark and an anti-quark grows in proportion to their distance (the force stays constant). In the lingo of strong-interaction physics this is called "confinement"; its flip side, namely that at short distances quarks get to be (almost) free, goes by the name "asymptotic freedom". Not so if they attempt to stray, I'm afraid.

So what happens if we insert this modified potential, which looks just like a Coulomb potential but has this funny extra term, into the equations that Lanczos wrote down to show that the hydrogen atom gets ripped apart by a strong electric field?

Well, what happens is that (after a bit of a song and dance that you'll have to read about by yourself) it turns out that a color-electric field strength just marginally larger than the string tension $\sigma$ is sufficient to rip apart the charmonium bound state. Rip apart, as in disintegrate. The color-electric field generated by these colliding nuclei will kill the charmonium, but not because a hot quark gluon plasma creates a screening effect: the cold color-electric field rips the bound state apart!
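A schematic way to see why the string tension sets the critical field strength (my back-of-the-envelope sketch, not the actual calculation in the paper): a constant color-electric field $\mathcal E$ pulling the $c$ and $\bar c$ apart (with the color charge absorbed into $\mathcal E$) adds a linear term to the potential along the field direction,
$$V_{\rm eff}(z)=-\frac43\frac{\alpha_s}{z}+\sigma z-\mathcal E\,z\ .$$
For $\mathcal E<\sigma$ the confining wall still rises at large $z$ and the bound state survives, while for $\mathcal E>\sigma$ the potential tips over and runs off to $-\infty$, so the pair gets pulled apart, much like the electron in Lanczos's strong-field hydrogen problem.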

The observable result of these two very different mechanisms might look the same, though: the number of $J/\psi$ particles you would expect is strongly reduced.

So what have we learned here? One way to look at this is to say that a smoking gun is only a smoking gun if there are no other smoking guns nearby. A depletion of $J/\psi$s does not necessarily signal a quark gluon plasma.

But this caveat went entirely unheard, as you already know because otherwise I would not be writing about this here. Even though we also published this same paper as a conference proceeding, nobody wanted to hear about something doubting the holy QGP.

Is the controversy resolved today? Actually, it is still in full swing, almost 30 years after the Matsui and Satz paper [1], and 25 years after my contribution that was summarily ignored. How can this still be a mystery? After all, we have had more and more powerful accelerators attempt to probe the elusive QGP. At first it was CERN's SPS, followed by RHIC in Brookhaven (not far away from where I wrote the article in question). And after RHIC, there was the LHC, which after basking in the glory of the Higgs discovery needed something else to do, and turned its attention to.... the QGP and $J/\psi$ suppression!

The reason why this is not yet solved is that the signal of $J/\psi$ suppression is tricky. What you want to do is compare the number of $J/\psi$ produced in a collision of really heavy nuclei (say, lead on lead) with the number produced when solitary protons hit other protons, scaled by the number of nucleons in the lead-on-lead collision. Except that in the messy situation of lead-on-lead, a $J/\psi$ can be produced at the edge rather than the center, be ripped apart, re-form, etc. Taking all these processes into account is tedious and messy.
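The quantity that experiments actually report for this comparison is the nuclear modification factor (standard notation in the field; the normalization by the mean number of binary nucleon-nucleon collisions $\langle N_{\rm coll}\rangle$ below is the conventional choice, stated here in my paraphrase):
$$R_{AA}=\frac{N^{AA}_{J/\psi}}{\langle N_{\rm coll}\rangle\,N^{pp}_{J/\psi}}\ ,$$
so that $R_{AA}=1$ means "no suppression", and $R_{AA}<1$ signals that something, screening or otherwise, is eating the $J/\psi$s.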

So the latest news is: yes, $J/\psi$ is suppressed in these collisions. But whether it is due to "color-screening" as the standard picture of the QGP suggests, or whether it is because a strong color-electric field rips apart these states (which could happen even if there is no plasma present at all as I have shown in the paper you can download from here), this we do not yet know. After all this time.

[1] T. Matsui and H. Satz, "J/ψ suppression by quark-gluon plasma formation," Phys. Lett. B 178 (1986) 416.

Now, move over to Part 3, where I awkwardly explain the meaning of the word "Prolegomena".