
Friday, April 4, 2014

The Quantum Cloning Wars Revisited

Cloning isn't so much in the news anymore, as the novelty of Dolly has worn off and the cloning of farm animals and pets has become commonplace. But a different form of cloning--the cloning of quantum information, that is--is still very much discussed.
Dolly the cloned sheep. (Pining for the fjords)
There are a number of fundamental rules that we abide by in quantum physics. That probabilities are given by the squared modulus of the wavefunction's amplitude, for example (Born's rule). Or, that quantum interference is destroyed if you try to know what's really goin' on (the lesson of the double-slit experiment).

Or, that quantum physics is linear.

It really is. What this means is that if you have quantum wavefunctions $\psi$ and $\phi$ and an operator $U$ acting on the sum of them, then you get

$U(\psi+\phi)=U\psi +U\phi$.
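That linearity is easy to check numerically. A minimal sketch (NumPy assumed; the Hadamard gate stands in for a generic $U$, and any unitary would do):

```python
import numpy as np

# Hadamard gate as a stand-in for a generic linear operator U
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

psi = np.array([1.0, 0.0])   # the state |0>
phi = np.array([0.0, 1.0])   # the state |1>

# Linearity: U acting on a sum equals the sum of U acting on each part
lhs = U @ (psi + phi)
rhs = U @ psi + U @ phi
print(np.allclose(lhs, rhs))  # True
```

Of course it works: operators act by matrix multiplication, and matrix multiplication distributes over addition. That is the whole point.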

Pretty innocuous-looking, right? But the consequences of this little harmless statement are mighty. This linearity implies that quantum states cannot be cloned. They can't be Dolly. They can't be Xeroxed. Thou shalt not clone quantum states. How is linearity going to legislate that?

Well, let's first state what cloning of a quantum state would look like. Could we clone a state such as $\phi$? 

(I don't have to tell you what $\phi$ is to answer the question, as you will understand in a minute-and-a-half). 

If you know what $\phi$ is, then yes, you can clone this state! (Wait for the apparent contradiction to be resolved before firing off your email).

Cloning in quantum mechanics means that if you start with a state $\phi$, then after cloning you have the state $\phi\phi$: two copies of $\phi$. It is actually possible to design an operator $U$ that does precisely this:

$U\phi 0=\phi\phi$

What happens here is that the second state "0" is turned into $\phi$ by $U$: $U$ literally measures $\phi$, and then uses this knowledge of what $\phi$ is to turn 0 into a copy of that $\phi$. So you can clone any known state.

"What do you mean by 'known'?"

Well you see, for $U\phi$ to be $\phi$ (and so that we could turn 0 into $\phi$), you must already have known the basis that $\phi$ was prepared in. Because otherwise, the measurement would have changed $\phi$. 

But here comes the rub. Imagine your state is $\phi+\psi$. In general, these are not in the same basis. Let's try to clone that one:

When we apply this operator $U$ to $\phi +\psi$, look what happens because of linearity:

$U(\phi+\psi)=U\phi+U\psi=\phi\phi+\psi\psi$    (1). 

Neato. But that is not the cloning of $\phi+\psi$. That would have been $(\phi+\psi)(\phi+\psi)$. That's not the same as Eq. (1). 
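You can watch this failure happen in a few lines of code. A sketch (NumPy again; the two-qubit CNOT gate plays the role of the cloner $U$ here, since it copies the basis states 0 and 1 perfectly but mangles their superposition):

```python
import numpy as np

# CNOT flips the second qubit if the first is 1, so CNOT |x>|0> = |x>|x> for x = 0, 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# A known basis state clones perfectly:
print(np.allclose(CNOT @ np.kron(one, zero), np.kron(one, one)))  # True

# A superposition does not: the output is entangled, not two copies
plus = (zero + one) / np.sqrt(2)
out = CNOT @ np.kron(plus, zero)   # (|00> + |11>)/sqrt(2), a Bell state
ideal = np.kron(plus, plus)        # what true cloning would have produced
print(abs(out @ ideal) ** 2)       # ~0.5, far from the perfect overlap of 1
```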

So it is possible to clone any one particular (known) state, but it is not possible to clone superpositions of states (so-called "non-orthogonal states"). So in general, cloning is impossible. That's the no-cloning theorem, due to Bill Wootters and Wojciech Zurek [1], as well as Dennis Dieks [2], who discovered the theorem independently in 1982. 

It turns out that if you could clone quantum states, all hell would break loose in the universe. This is because the quantum mechanics of entanglement is really quite powerful: Einstein called entanglement "spooky action-at-a-distance". Two entangled quantum states can seemingly "communicate" over large distances. Distances so large that if they were communicating, then this would have to occur at a speed greater than that of light. And you can see how Einstein would seriously object to such a proposition. Strenuously. He would knock you over the head, is what he would do.

I say "seemingly", because even though entities A and B (often dubbed "Alice" and "Bob") that share an entangled state would obtain the same exact measurement results if they proceeded to measure the shared state (even if they were in different galaxies), it turns out that these measurements cannot be used for communication. Their measurement devices show the same result, but because this result is a random number, no information can be sent. Nada. Sleep well, Albert. 

Unless, that is, Alice or Bob (or both) could make copies of their entangled state. If they could do that, well then they could use those measurements to communicate superluminally--an idea due, as it turns out, to the science fiction writer Nick Herbert. (It was this conjecture that ultimately led to the formulation of the no-cloning theorem.) So, all hell would break loose if quantum cloning were allowed, because then you could communicate superluminally, which means you could travel backwards in time, kill a great-grandparent, and leave the universe in the mother of all time-paradoxes.

So quantum cloning is out, and all because of the linearity of quantum mechanics, and all that's a good thing.

Or is it?

The reason I'm asking is.... black holes, of course. You may have heard about the commotion I caused by announcing that black holes do not destroy information, because that information is copied just before it disappears in the black hole's abyss. Copied, I hasten to add, by this process called "stimulated emission" that must accompany absorption, as this ubiquitous gentleman named Albert E. told us about in 1917. But if I send into the black hole a quantum state $\phi$ rather than the classical "001010010001", wouldn't the black hole then violate the sacrosanct no-cloning theorem? The one that, if you violate it, should make the fabric of spacetime melt?

That is a question worth investigating in detail, which I have done in an article I wrote in 2006, and which is currently under review. Yes, it is another one of those.

Here's what we discovered (my collaborator in this was the same Greg Ver Steeg who collaborated with me on the 2004 article--published ten years later--and whose blog I'm linking here):

Black holes are quantum cloning machines.

How is this possible? Well, it is possible because black holes aren't perfect. Cloners, that is. It turns out that you are allowed to clone quantum states somewhat. You can do it as long as you are sloppy. That imperfect cloning is possible was discovered by Buzek and Hillery in 1996 [3]. They showed that you can design quantum cloning machines that take a quantum state $\phi$ and transform it--not into another quantum state--but into a density matrix $\rho$ that is fairly close to $\phi$. How close, you ask? This is measured by the fidelity $F$ of the cloning machine, which is just the expectation value of the density matrix $\rho$ within the initial quantum state $\phi$:

$F=\langle \phi|\rho|\phi\rangle$

$F=1$ means you created a perfect cloning machine where $\rho=| \phi \rangle \langle \phi|$. If you were able to do this, then of course you already know what $\phi$ was (a "known" state, one that you had previously measured). And perfect cloning of known states is allowed, because in that case you are cloning classical, rather than quantum information. You're standing next to the copier feeding sheets to the machine. That's right: the difference between classical and quantum information is simply whether or not you know what kind of a state you have on your hands!
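In code, the fidelity is a one-liner. A sketch with a made-up imperfect clone (the 80/20 mixture below is purely illustrative, not any particular cloning machine):

```python
import numpy as np

phi = np.array([0.6, 0.8])    # the input state (normalized)
ideal = np.outer(phi, phi)    # a perfect clone: rho = |phi><phi|

# An imperfect clone: 80% the right state, 20% white noise
rho = 0.8 * ideal + 0.2 * np.eye(2) / 2

F = phi @ rho @ phi           # F = <phi| rho |phi>
print(F)                      # ~0.9
```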

Come to think of it, that's really the difference between classical and quantum mechanics right there, in a nutshell. Forget all this $\hbar\to 0$ nonsense, that is not what makes quantum different from classical. It is whether you deal with orthogonal states or not. And if your quantum states are not orthogonal, then quantumness is how non-orthogonal your state is with respect to your measurement basis.

How well can you quantum-clone then? What's the highest achievable fidelity? This question was answered by Gisin and Massar a year after quantum cloning machines were invented [4]. They found that the optimal fidelity for a machine that tries to make two copies from a single unknown quantum state is $F=5/6$. That's not bad, right? Actually, they got an even more impressive result: they showed that if you had $N$ identically prepared quantum states (prepared by someone other than you, because you remember that you are not to know what kind of quantum state you are trying to clone), and you want to create $M$ copies of this state ($M$ sort-of-copies, that is), then the best you can do with an optimal universal cloning machine is

$F=\frac{M(N+1)+N}{M(N+2)}$    (2)

The "universal" here means that this cloning machine will achieve the fidelity no matter what the initial state is. There are cloning machines that can do better for some states, and worse for others. These are called "state-dependent" cloning machines.

Now, back to black holes. What kind of cloning machines are they? What's their fidelity? This, it turns out, depends on how reflective the black hole is.

"Reflective? Aren't black holes supposed to be black, ergo non-reflective?"

Actually, not necessarily. Black holes can reflect stuff, in particular if you hurl something at the black hole somewhat at an angle (that is, not straight on, as in the figure below).


Black holes are actually surrounded by a potential barrier, which we can choose to model by a semi-transparent mirror surrounding the black hole, with reflectivity $1-\alpha$. If you read the post about the quantum capacity of black holes, then you remember this reflectivity.

So, let's first consider black holes that perfectly reflect radiation. I called them "white holes" earlier, even though this is not exactly how white holes are defined in the literature. But I really don't care, because I believe that there is a fundamental relationship between black holes and white holes that involves time-reversal, or else flipping the inside and the outside of the holes. Keep on reading if that intrigues you.

So, nothing can enter a white hole, while stuff from inside the white hole makes it out unhindered. But this white hole is very different from a mirror, because besides the reflection, it also stimulates the emission of radiation in response to the stuff that is reflected. And these stimulated states are the clones of the incoming states. This means of course that after you send in the quantum state $\phi$, the black hole returns two almost-clones (one from the reflection, and one from stimulated emission). And guess what: the fidelity of these clones is $F=5/6$. The white hole is an optimal universal quantum cloning machine!

Now that I impressed you with this statement, let me quickly make it even more impressive. It turns out that the black hole isn't just a $1\to 2$ cloner. Because the stimulated emission of radiation produces an arbitrary number of "copies", the white hole is a $1\to M$ cloner if you need it to be. In fact, it will be an $N\to M$ cloner if you want. And it will perform this feat with the optimal fidelity of Gisin and Massar. That's Equation (2) above.

Now, if you're not duly impressed, this is perhaps because you think that white holes aren't that interesting. But you should keep in mind that in Hawking's original formulation, black holes did not absorb anything either, as paradoxical as that sounds. Also, mirrors are not universal quantum cloners. You need the stimulated emission effect to make this possible.

Now, let us look beyond the horizon. Even though the white hole perfectly reflects the quantum states you fling at it, there is actually stuff beyond the horizon. Indeed, as described in the quantum capacity blog post, there are anti-clones behind the horizon.

"Anti-clones? Are you just all-out kidding me now?"

"Anti-clone" is a good word. But it really is a thing. Anti-clones are the stimulated "twin" of the clone outside the black hole. They must be there, because you can't just stimulate a copy without violating a bunch of conservation laws, like particle number, momentum, and whatever else characterizes the thing you send in. You've got to do this in twins: particle-anti-particle, clone and anti-clone.

"What is the fidelity of the anti-clone? Is it 5/6 also?"

The answer is no. That would be bad, as I'll discuss. That anti-clone has a fidelity of 2/3. Strange, you think? Not after I tell you what this number represents. In fact, the fidelity of the anti-clones behind the horizon (any number $M$ of them), given that somebody sent in $N$ identically prepared copies (that were all reflected, of course), is independent of $M$ and given by

$F=\frac{N+1}{N+2}$    (3)

So, that's as good as you can hope to reconstruct an arbitrary quantum state using the anti-clones behind the horizon.
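Note, by the way, that Eq. (3) is exactly what you get from the Gisin-Massar fidelity (2) in the limit of infinitely many clones:

$\lim_{M\to\infty}\frac{M(N+1)+N}{M(N+2)}=\lim_{M\to\infty}\frac{N+1+N/M}{N+2}=\frac{N+1}{N+2}$.

That is no accident: an unlimited number of copies can carry at most the information that classical measurements of the $N$ input states could extract, which is where we are headed next.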

That's actually a very interesting result, because it happens to be Pierre Simon Laplace's "rule of succession" that he derived in the 18th century. Not in the context of black holes, mind you, but in the context of our sun.
Pierre-Simon Laplace (1749-1827)
Source: Wikimedia

This is the probability that the sun will rise tomorrow given that you have observed it to rise $N$ previous times. It assumes a "prior" that posits that both the sun rising as well as not rising are possible outcomes of your "experiment", and these are added to the $N$ total observations. Thus, out of $N+2$ events, $N+1$ have the sun rising. This correspondence is surprising, because the fidelity (3) is in fact the probability to correctly estimate the state of a quantum two-state system (while the random variable "sun rising" is classical) using $N$ classical measurements only, as was shown by Massar and Popescu in 1995 [5]. Somebody should investigate this curious coincidence, which in my view is not a coincidence at all.

All right then, to sum it up: For a white hole, the clone fidelity is the optimal 5/6, and the anti-clone fidelity (behind the curtain) is 2/3, which is the best quantum state reconstruction you can do with classical means (like, measurements).

What about perfectly absorbing black holes? I'll make that short and sweet given all that we learned. The fidelity of clones outside is 2/3, while the fidelity of the anti-clones inside is 5/6.

"It's just the opposite from the white hole situation! It is as if the inside and the outside of the black hole had been flipped!"

I told you so, didn't I? Yes, sitting inside of a black hole (if your turbine engines allow you to sit), you despair that none of your signals make it outside. Your signals are turned back towards you, as if they were reflected. As if you were looking at a white hole horizon.

To wrap this meandering post up, Greg and I did calculate the cloning fidelity (for the clones outside the horizon) for arbitrary reflectivity. It turns out that this fidelity is very close to the optimal one as long as the black hole is not too tiny (see the figure below).
Cloning fidelity as a function of the number of clones produced, for moderately-sized black holes, and different black hole absorptivities. 
So, black holes are almost optimal quantum cloners. Who knew?

Actually, this possibility was discussed briefly as early as 1990, according to Lenny Susskind in his book "The Black Hole War". Indeed, Susskind writes that he proposed (in front of Sid Coleman and Stephen Hawking) that the problem would be solved if "the region just outside the horizon is occupied by a lot of tiny invisible Xerox machines" [6, p. 227]. But he then immediately retreated from this idea, because he thought it would violate the no-cloning theorem. Which we now know it does not.

Susskind later revived the idea in his "black hole complementarity" proposal, claiming that somehow information would both fall into the black hole and be reflected at the horizon, but that the no-cloning theorem would not be violated because nobody would ever know (as you can't make an experiment both inside and outside of the black hole). This idea is, as I'm sure you can now see as clear as daylight, based on a profound misunderstanding of quantum cloning, and in particular its relation to stimulated emission of radiation.

Finally, given that Pierre-Simon Laplace occupies such an interesting place in this post, I ought to note in passing that he is the one who invented (discovered?) the special functions now called "Spherical Harmonics", which play such a fundamental role in quantum physics (as part of the wavefunction of the hydrogen atom). Welcome home, Laplace!

The subject matter discussed in this blog post is in Ref. [7], and currently under review. I will update this post with the exact reference once the paper has appeared.

References

[1] W. K. Wootters and W. H. Zurek, A single quantum cannot be cloned. Nature 299, 802 (1982) 
[2] D. Dieks, Communication by EPR devices. Phys. Lett. A 92, 271 (1982). 
[3] V. Buzek and M. Hillery, Quantum copying: Beyond the no-cloning theorem. Phys. Rev. A 54, 1844 (1996) 
[4] N. Gisin and S. Massar, Optimal quantum cloning machines. Phys. Rev. Lett. 79, 2153 (1997)
[5] S. Massar and S. Popescu, Optimal extraction of information from finite quantum ensembles. Phys. Rev. Lett. 74, 1259 (1995)
[6] L. Susskind, The Black Hole War. Back Bay Books, 2008.
[7] C. Adami and G. Ver Steeg, Black holes are almost optimal quantum cloners, quant-ph/0601065 (2006)

Monday, November 18, 2013

Darwin inside the machine: A brief history of digital life

In 1863, the British writer Samuel Butler wrote a letter to the newspaper The Press entitled "Darwin Among the Machines". In this letter, Butler argued that machines had some sort of "mechanical life", and that machines would some day evolve to be more complex than people, and ultimately drive humanity to extinction:

Samuel Butler (1835-1902)
Source: Wikimedia 
"Day by day, however, the machines are gaining ground upon us; day by day  we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question."   
(S. Butler, 1863)

While futurist and doomsday prognosticator Ray Kurzweil would probably agree, I think that the realm of the machines is still far in the future. Here I would like to argue that Darwin isn't among the machines just yet, but he is certainly inside the machines.

The realization that you could observe life inside a computer was fairly big news in the early 1990s. The history of digital life has been chronicled before, but perhaps it is time for an update, because a lot has happened in twenty years. I will try to be brief: A Brief History of Digital Life. But you know how I have a tendency to fail in this department.

Who came up with the idea that life could be abstracted to such a degree that it could be implemented (mind you, this is not the same thing as simulated) inside a computer? 

Why, that would be just the same guy who actually invented the computer architecture we all use! You all know who this is, but here's a pic anyway:

John von Neumann (Source: Wikimedia)
I don't have many scientific heroes, but he is perhaps my number one. He was first and foremost a mathematician (who also made fundamental contributions to theoretical physics). After he invented the von Neumann architecture of modern computers, he asked himself: could I create life in it? 

Who would ask himself such a question? Well, Johnny did! He asked: if I could program an entity that contained the code that would create a copy of itself, would I have created life? Then he proceeded to try to program just such an entity, in terms of a cellular automaton (CA) that would self-replicate. 
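Johnny's question--can a program contain the code to produce a copy of itself?--has a minimal modern incarnation: the "quine", a program whose output is its own source. Here is a two-line Python example (nothing like von Neumann's cellular-automaton construction, of course, but it makes the idea concrete):

```python
quine = 'quine = %r\nprint(quine %% quine)'
print(quine % quine)
```

Run it, and it prints those two lines verbatim. The trick, just as in von Neumann's design (and in DNA), is that the same string is used twice: once as instructions to execute, and once as data to copy.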

Maybe you thought that Stephen Wolfram invented CAs? He might want to convince you of that, but the root of CAs goes back to Stanislaw Ulam, and indeed our hero Johnny. (If Johnny isn't your hero yet, I'll try to convince you that he should be. Did you know he invented Game Theory?) Johnny actually wrote a book called "Theory of Self-Reproducing Automata" that was only published after von Neumann's death. He died comparatively young, at age 53. Johnny was deeply involved in the Los Alamos effort to build an atomic bomb, and was present at the 1946 Bikini nuclear tests. He may have paid the ultimate price for his service, dying of cancer likely due to radiation exposure. Incidentally, Richard Feynman also succumbed to cancer from radiation, but he enjoyed a much longer life. We can only imagine what von Neumann could have given us had he had the privilege of living into his 90s, as for example Hans Bethe did. And just like that, I listed two other scientists on my hero list! They both are in my top 5.

All right, let's get back to terra firma. Johnny invented Cellular Automata just so that he could study self-replicating machines. What he did was create the universal constructor, albeit completely in theory. But he designed it in all detail: a 29-state cellular automaton that would (when executed) literally construct itself. It was a brave (and intricately detailed) construction, but he never got to implement it on an actual computer. This was done almost fifty years later by Nobili and Pesavento, who used a 32-state CA. They were able to show that von Neumann's construction ultimately was able to self-reproduce, and even led to the inheritance of mutations.

The Nobili-Pesavento implementation of von Neumann's self-replicating CA, with two copies visible.
Source: Wikimedia 
von Neumann used information coded in binary in a "tape" of cells to encode the actions of the automaton, which is quite remarkable given that the structure of DNA had yet to be discovered.

Perhaps because of von Neumann's untimely death, or perhaps because computers would soon be used for more "serious" applications than making self-replicating patterns, this work was not continued. It was only in the late 1980s that Artificial Life (in this case, creating a form of life inside of a computer) became fashionable again, when Chris Langton started the Artificial Life conferences in Santa Fe, New Mexico. While Langton's work also focused on implementations using CAs, Steen Rasmussen at Los Alamos National Laboratory tried another approach: take the idea of computer viruses as a form of life seriously, and create life by giving self-replicating computer programs the ability to mutate. To do this, he created self-replicating computer programs out of a computer language that was known to support self-replication: "Redcode", the language used in the computer game Core War. In this game, which was popular in the late 80s, the object is to force the opposing player's programs to terminate. One way to do this is to write a program that self-replicates.
Screen shot of a Core War game, showing the programs of two opposing players in red and green. Source: Wikimedia
Rasmussen created a simulated computer within a standard desktop, provided the self-replicator with a mutation instruction, and let it loose. What he saw was first of all quick proliferation of the self-replicator, followed by mutated programs that not only replicated inaccurately, but also wrote over the code of un-mutated copies. Soon enough no self-replicator with perfect fidelity would survive, and the entire population died out, inevitably. The experiment was in itself a failure, but it ultimately led to the emergence of digital life as we know it, because when Rasmussen demonstrated his "VENUS" simulator at the Santa Fe Institute, a young tropical ecologist was watching over his shoulder: Tom Ray of the University of Oklahoma. Tom quickly understood what he had to do in order to make the programs survive. First, he needed to give each program a write-protected space. Then, in order to make the programs evolvable, he needed to modify the programming language so that instructions did not refer directly to addresses in memory, simply because such a language turns out to be very fragile under mutations. 

Ray went ahead and wrote his own simulator to implement these ideas, and called it tierra.  Within the simulated world that Ray had created inside the computer, real information self-replicated, and mutated. I am writing "real" because clearly, the self-replicating programs are not simulated. They exist in the same manner as any computer program exists within a computer's memory: as instructions encoded in bits that themselves have a physical basis: different charge states of a capacitor. 
Screen shot of an early tierra simulator, showing the abundance distribution of individual genotypes in real time. Source: Wikimedia. 
The evolutionary dynamics that Ray observed were quite intricate. First, the 80-instruction-long self-replicator that Ray had painstakingly written himself started to evolve towards smaller sizes, shrinking, so to speak. And while Ray suspected that no program could self-replicate that was smaller than, say, 40 instructions long, he witnessed the sudden emergence of an organism that was only 20 lines long. These programs turned out to be parasites that "stole" the replication program of a "host" (while the programs were write-protected, Ray did not think he should insist on execution protection). Because the parasites did not need the "replication gene" they could be much smaller, and because the time it takes to make a copy is linear in the length of the program, these parasites replicated like crazy, and threatened to drive the host programs to extinction.

But of course that wouldn't work, because the parasites relied on those hosts! Even better, before the parasites could wipe out the hosts, a new host emerged that could not be exploited by the parasite: the evolution of resistance. In fact, a very classic evolutionary arms race ensued, leading ultimately to a mutualistic society. 

While what Ray observed was incontrovertibly evolution, the outcome of most experiments ended up being much the same: shrinking programs, evolution of parasites, an arms race, and ultimately coexistence. When I read that seminal 1992 paper [1] on a plane from Los Angeles to New York shortly after moving to the California Institute of Technology, I immediately started thinking about what one would need to do in order to make the programs do something useful. The tierran critters were getting energy for free, so they simply tried to replicate as fast as possible. But energy isn't free: programs should be doing work to gain that energy. And inside a computer, work is a calculation.

After my postdoctoral advisor Steve Koonin (a nuclear physicist, because the lab I moved to at Caltech was a nuclear theory lab) asked me (with a smirk) if I had liked any of the papers he had given me to read on the plane, I did not point to any of the light-cone QCD papers: I told him I liked that evolutionary one. He then asked: "Do you want to work on it?", and that was that.

I started to rewrite tierra so that programs had to do math in order to get energy. The result was this paper, but I wasn't quite happy with tierra. I wanted it to do much more: I wanted the digital critters to grow in a true 2D space (like, say, on a Petri dish)
A Petri dish with competing E. coli bacteria. Source: BEACON (MSU).
and I wanted them to evolve a complex metabolism based on computations. But writing computer code in C wasn't my forte: I was an old-school Fortran programmer. So I figured I'd pawn the task off to some summer undergraduate students. Two were visiting Caltech that summer: C. Titus Brown, who was an undergrad in Mathematics at Reed College, and Charles Ofria, a computer science undergrad at Stony Brook University, where I had gotten my Ph.D. a few years earlier. I knew both of them because Titus is the son of theoretical physicist Gerry Brown, in whose lab I obtained my Ph.D., and Charles used to go to high school with Titus.
From left: Chris Adami, Charles Ofria, Cliff Bohm, C. Titus Brown (Summer 1993)
Above is a photo taken during the summer when the first version of Avida was written, and if you clicked on any of the links above then you know that Titus and Charles have moved on from being undergrads. In fact, as fate would have it, we are all back together here at Michigan State University, as the photo below documents--where we attempted to recreate that old polaroid!
Same cast of characters, 20 years later. This one was taken not in my rented Caltech apartment, but in CTB's sprawling Michigan mansion.
The version of the Avida program that ultimately led to 20 years of research in digital evolution (and in time became one of the cornerstones of the BEACON Center) was the one written by Charles. (Whenever I asked either of the two to just modify Tom Ray's tierra, they invariably proceeded by rewriting everything from scratch. I clearly had a lot to learn about programming.)

So what became of this digital revolution of digital evolution? Besides germinating the BEACON Center for the Study of Evolution in Action, Avida has been used for more and more sophisticated experiments in evolution, and we think that we aren't done by a long shot. Avida is also used to teach evolution, in high-school and college classrooms.

Avida: the digital life simulator developed at Caltech and now under active development at Michigan State University, exists as a research as well as an educational version. Source: BEACON Institute. 
Whenever I give a talk or class on the history of digital life (or even its future), I seem to invariably get one question that wonders whether revealing the power that is immanent in evolving computer viruses is, to some extent, reckless.

You see, while the path to the digital critters that we call "avidians" was never really inspired by real computer viruses, you had to be daft not to notice the parallel. 

What if real computer viruses could mutate and adapt to their environment, almost instantly negating any and all design efforts of the anti-malware industry? Was this a real possibility?

Whenever I was asked this question, in a public talk or privately, I would equivocate. I would waffle. I had no idea. 

After a while I told myself: "Shouldn't we know the answer to this question? Is it possible to create a new class of computer viruses that would make all existing cyber-security efforts obsolete?"

Because if you think about it, computer viruses (the kind that infect your computer once in a while if you're not careful) already display some signs of life. I'll show you here that one of the earliest computer viruses (known as the "Stoned" family) displayed one such sign, namely the tell-tale waxing and waning of infection rate as a function of time.
Incidents of infection with the "Stoned" virus over time, courtesy of [2].
Why does the infection rate rise and fall? Well, because the designers of the operating system (the Stoned virus infected other computers only by direct contact: an infected floppy disk) were furiously working on thwarting this threat. But the virus designers (well, nobody called them that, really--they were called "hackers") were just as furiously working on defeating any and all countermeasures. A  real co-evolutionary arms race ensued, and the result was that the different types of Stoned viruses created in response to the selective pressure imparted by operating system designers could be rendered in terms of a phylogeny of viruses that is very reminiscent of the phylogeny of actual biochemical viruses (think influenza, see below).
Phylogeny of  Stoned computer viruses (credit: D.H. Hull)
What if these viruses could mutate autonomously (like real biochemical viruses) rather than wait for the "intelligent design" of hackers? Is this possible?

I did not know the answer to this question, but in 2006 I decided to find out.

And to find out, I had to try as hard as I could to achieve the dreaded outcome. The thinking was: if my lab, trained in all ways to make things evolve, cannot succeed in creating the next-generation malware threat, then perhaps no-one can. Yes, I realize that this is nowhere near a proof. But we had to start somewhere. And if we were able to do this, then we would know the vulnerabilities of our current cyber-infrastructure long before the hackers did. We would be playing white hat vs. black hat, for real. But we would do this completely secretly.

In order to do this, I talked to a private foundation, which agreed to provide funds for my lab to investigate the question, provided we kept strict security protocols: no work was to be carried out on computers connected to the Internet, all notebooks were to be kept locked up in a safe, and virus code was only to be transferred from computer to computer via CD-ROM, also stored in a safe. There were several other protocol guidelines, which I will spare you. The day after I received notice that I was awarded the grant, I went and purchased a safe.

To cut a too-long story into a caricature of "short": it turned out to be exceedingly difficult to create evolving computer viruses. I could devote an entire blog post to outlining all the failed approaches that we took (and I suspect that such a post would be useful for some segments of my readership). My graduate student Dimitris Iliopoulos set up a computer (disconnected, of course) with a split brain: one half where the virus evolution would take place, and one half that monitored the population of viruses that replicated--not in a simulated environment--but rather in the brutal reality of a real computer's operating system.

Dimitris discovered that the viruses did not evolve to be more virulent. They became as tame as possible. Because we had a "watcher" program monitoring the virus population (culling individuals in order to keep the population size constant), programs evolved to escape the attention of this program--because being noticed by said program would, ultimately, spell death.

This strategy of "hiding" turns out to be fairly well-known amongst biochemical viruses, of course. But our work was not all in vain. We contacted one of the leading experts in computer security at the time, Hungarian-born Péter Ször, who worked at the computer security company Symantec and wrote the book on computer viruses. He literally wrote it: you can buy it on Amazon here.

When we first discussed the idea with Péter, he was skeptical. But he soon warmed up to the idea, and provided us with countless examples of how computer viruses adapt--sometimes by accident, sometimes by design. We ended up writing a paper together on the subject, which was all the rage at the Virus Bulletin conference in Ottawa, in 2008 [3]. You can read our paper by clicking on this link here.

Which brings me, finally, to the primary reason why I am reminiscing about the history of digital life, and my collaboration with Péter Ször in particular. Péter passed away suddenly just a few days ago. He was 43 years old. He worked at Symantec for the majority of his career, but later switched to McAfee Labs as Senior Director of Malware Research. Péter kept your PC (if you choose to use such a machine) relatively free from this artfully engineered brand of viruses for decades. He worried whether evolution could ultimately outsmart his defenses and, at least for this brief moment in time, we thought we could.

Péter Ször (1970-2013)
[1] T. S. Ray, An approach to the synthesis of life. In: Langton, C., C. Taylor, J. D. Farmer, & S. Rasmussen [eds], Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity, vol. XI, pp. 371-408. Redwood City, CA: Addison-Wesley (1991).

[2] S. White, J. Kephart, D. Chess. Computer Viruses: A Global Perspective. In: Proceedings of the 5th Virus Bulletin International Conference, Boston, September 20-22, 1995. Virus Bulletin Ltd, Abingdon, England, pp. 165-181.

[3] D. Iliopoulos, C. Adami, and P. Ször. Darwin Inside the Machines: Malware Evolution and the Consequences for Computer Security (2009). In: Proceedings of VB2008 (Ottawa), H. Martin ed., pp. 187-194

Saturday, November 2, 2013

Black holes and the fate of quantum information

I have written about the fate of classical information interacting with black holes fairly extensively on this blog (see Part 1, Part 2, and Part 3). Reviewers of the article describing those results nearly always respond that I should be considering the fate of quantum, not classical information. 

In particular, they ask me to comment on what all this means in the light of more modern controversies, such as black hole complementarity and firewalls. As if solving the riddle of what happens to classical information is not nearly good enough. 

I should first state that I disagree with the idea that it is necessary to discuss the fate of quantum information in an article that discusses what happens to classical information. I'll point out the differences between those two concepts here, and hopefully I'll convince you that it is perfectly reasonable to discuss these independently. However, I have given in to these requests, and now written an article (together with my colleague Kamil Bradler at St. Mary's University in Halifax, Canada) that studies the fate of quantum information that interacts with a black hole. Work on this manuscript explains (in part) my prolonged absence from blogging.

The results we obtained, it turns out, do indeed shed new light on these more modern controversies, so I'm grateful for the reviewers' requests after all. The firewalls have "set the physics world ablaze", as one blogger writes. These firewalls (that are suggested to surround a black hole) were invented to correct a perceived flaw in another widely discussed theory, namely the theory of black hole complementarity due to the theoretical physicist Leonard Susskind. I will briefly describe these in more detail below, but before I can do this, I have to define for you the concept of quantum entanglement.

Quantum entanglement lies at the very heart of quantum mechanics, and it is what makes quantum physics different from classical physics. It is clear, as a consequence, that I won't be able to make you understand quantum entanglement if you have never studied quantum mechanics. If this is truly your first exposure, you should probably consult the Wiki page about quantum entanglement, which is quite good in my view. 

Quantum entanglement is an interaction between two quantum states that leaves them in a joint state that cannot be described in terms of the properties of the original states. So, for example, two quantum states $\psi_A$ and $\psi_B$ may have separate properties before entanglement, but after they interact they will be governed by a single wavefunction $\psi_{AB}$ (there are exceptions). So for example, if I imagine a wavefunction $\psi_A=\sigma|0\rangle +\tau|1\rangle$ (assuming the state to be correctly normalized) and a quantum state B simply given by $|0\rangle$, then a typical entangling operation $U$ will leave the joint state entangled:

      $U(\sigma|0\rangle +\tau|1\rangle)|0\rangle=\sigma|00\rangle +\tau|11\rangle$.    (1)

The wavefunction on the right hand side is not a product of the two initial wavefunctions, and in any case classical systems can never be brought into such a superposition of states in the first place. Another interesting aspect of quantum entanglement is that it is non-local. If A and B represent particles, you can still move one of the particles far away (say, to another part in the galaxy). They will still remain entangled. Classical interactions are not like that. At all.
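If you'd like to see Eq. (1) in action, here is a minimal numpy sketch (my own illustration, not from the post): the entangling operation $U$ can be realized by the familiar CNOT gate, which flips the second qubit exactly when the first is $|1\rangle$. The amplitudes $\sigma, \tau$ are placeholder values.

```python
import numpy as np

# Hypothetical amplitudes sigma, tau with |sigma|^2 + |tau|^2 = 1
sigma, tau = 0.6, 0.8

# Initial product state (sigma|0> + tau|1>) (x) |0>, in the basis |00>,|01>,|10>,|11>
psi_A = np.array([sigma, tau])
psi_B = np.array([1.0, 0.0])
product = np.kron(psi_A, psi_B)

# CNOT swaps |10> <-> |11> and leaves |00>,|01> alone: a typical entangling U
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

entangled = CNOT @ product
print(entangled)  # [0.6 0.  0.  0.8], i.e. sigma|00> + tau|11>
```

The output vector has support only on $|00\rangle$ and $|11\rangle$, so it is not a product of any two single-qubit states: exactly the entangled form on the right-hand side of (1).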

A well-known entangled wavefunction is that of the Einstein-Podolsky-Rosen pair, or EPR pair. This is a wavefunction just like (1), but with $\sigma=\tau=1/\sqrt{2}$. The '0' vs '1' state can be realized via any physical quantum two-state system, such as a spin 1/2-particle or a photon carrying a horizontal or vertical polarization. 

What does it mean to send quantum information? Well, it just means sending quantum entanglement! Let us imagine a sender Alice, who controls a two-state quantum system that is entangled with another system (let us call it $R$ for "reference"). This means that her quantum wavefunction (with respect to $R$) can be written as 

                $|\Psi\rangle_{AR}=\sigma|00\rangle_{AR} +\tau|11\rangle_{AR}$    (2)

where the subscripts $AR$ refer to the fact that the wavefunction now "lives" in the joint space $AR$. $A$ and $R$ (after entanglement) do not have individual states any longer.

Now, Alice herself should be unaware of the nature of the entanglement between $A$ and $R$ (meaning, Alice does not know the values of the complex constants $\sigma$ and $\tau$). She is not allowed to know them, because if she did, then the quantum information she would send would become classical. Indeed, Alice can turn any quantum information into classical information by measuring the quantum state before sending it. So let's assume Alice does not do this. She can still try to send the arbitrary quantum state that she controls to Bob, so that after the transmittal her quantum state is unentangled with $R$, but it is now Bob's wavefunction that reads

           $|\Psi\rangle_{BR}=\sigma|00\rangle_{BR} +\tau|11\rangle_{BR}$    (3). 

In this manner, entanglement was transferred from $A$ to $B$. That is a quantum communication channel.

Of course, lots of things could happen to the quantum entanglement on its way to Bob. For example, it could be waylaid by a black hole. If Alice sends her quantum entanglement into a black hole, can Bob retrieve it? Can Bob perform some sort of magic that will leave the black hole unentangled with $A$ (or $R$), while he himself is entangled as in (3)?

Whether or not Bob can do this depends on whether the quantum channel capacity of the black hole is finite, or whether it vanishes. If the capacity is zero, then Bob is out of luck. The best he can do is to attempt to reconstruct Alice's quantum state using classical state estimation techniques. That's not nothing by the way, but the "fidelity" of the state reconstruction is at most 2/3. But I'm getting ahead of myself.
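That 2/3 figure is easy to check numerically. Here is a Monte Carlo sketch of my own (not from the paper): a measure-and-prepare strategy measures a Haar-random qubit in the computational basis and prepares the observed basis state, achieving reconstruction fidelity $p^2+(1-p)^2$ for outcome probabilities $(p, 1-p)$, which averages to 2/3.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_qubit():
    """Haar-random pure qubit state as a 2-vector of complex amplitudes."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Measure in the computational basis, prepare the observed basis state:
# the fidelity of the reconstructed mixed state with |psi> is p^2 + (1-p)^2.
fids = []
for _ in range(200_000):
    psi = random_qubit()
    p = abs(psi[0]) ** 2
    fids.append(p**2 + (1 - p)**2)

print(np.mean(fids))  # close to 2/3
```

For a Haar-random qubit, $p$ is uniform on $[0,1]$, so the average is $\int_0^1 [p^2+(1-p)^2]\,dp = 2/3$: the classical ceiling that Bob is stuck with if the quantum capacity vanishes.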

Let's first take a look at this channel. I'll picture this in a schematic manner where the outside of the black hole is at the bottom, and the inside of the black hole is at the top, separated by the event horizon. Imagine Alice sending her quantum state in from below. Now, black holes (as all realistic black bodies) don't just absorb stuff: they reflect stuff too. How much is reflected depends on the momentum and angular momentum of the particle, but in general we can say that a black hole has an absorption coefficient $0\leq\alpha\leq1$, so that $\alpha^2$ is just the probability that a particle that is incident on the black hole is absorbed.



So we see that if $n$ particles are incident on a black hole (in the form of entangled quantum states $|\psi\rangle_{\rm in}$), then $(1-\alpha^2)n$ come out because they are reflected at the horizon. Except as we'll see, they are in general not the pure quantum states Alice sent in anymore, but rather a mixture $\rho_{\rm out}$. This is (as I'll show you) because the black hole isn't just a partially silvered mirror. Other things happen, like Hawking radiation. Hawking radiation is the result of quantum vacuum fluctuations at the horizon, which constantly create particle-antiparticle pairs. If this happened anywhere but at the event horizon, the pairs would annihilate back, and nobody would be the wiser. Indeed, such vacuum fluctuations happen constantly everywhere in space. But if it happens at the horizon, then one of the particles could cross the horizon, while the other (that has equal and opposite momentum), speeds away from it. That now looks like the black hole radiates. And it happens at a fixed rate that is determined by the mass of the black hole. Let's just call this rate $\beta^2$.


As you can see, the rate of spontaneous emission does not depend on how many particles Alice has sent in. In fact, you get this radiation whether or not you throw in a quantum state. These fluctuations go on before you send in particles, and after. They have absolutely nothing to do with $|\psi\rangle_{\rm in}$. They are just noise. But they are (in part) responsible for the fact that the reflected quantum state $\rho_{\rm out}$ is not pure anymore. 

But I can tell you that if this were the whole story, then physics would be in deep deep trouble. This is because you cannot recover even classical information from this channel if $\alpha=1$. Never mind quantum. In fact, you could not recover quantum information even in the limit $\alpha=0$, a perfectly reflecting black hole! (I have not shown you this yet, but I will). 

This is not the whole story, because a certain gentleman in 1917 wrote a paper about what happens when radiation is incident on a quantum mechanical black body. Here is a picture of this gentleman, along with the first paragraph of the 1917 paper:


Albert Einstein in 1921 (source: Wikimedia), alongside the first paragraph of his 1917 article "On the quantum theory of radiation"
What Einstein discovered in that paper is that you can derive Planck's Law (about the distribution of radiation emanating from a black body) using just the quantum mechanics of absorption, spontaneous emission, and stimulated emission of radiation. Stimulated emission is by now familiar to everybody, because it is the principle upon which lasers are built. What Einstein showed in that paper is that stimulated emission is an inevitable consequence of absorption: if a black body absorbs radiation, it also stimulates the emission of radiation, with the same exact quantum numbers as the incoming radiation.

Here's the figure from the Wiki page that shows how stimulated emission makes "two out of one":

Quantum "copying" during stimulated emission from an atom (source: Wikimedia)


In other words, all black bodies are quantum copying machines!

"But isn't quantum copying against the law?"

Actually, now that you mention it, yes it is, and the law is much more stringent than the law against classical copying (of copy-righted information, that is). The law (called the no-cloning theorem) is such that it cannot--ever--be broken, by anyone or anything. 

The reason why black bodies can be quantum copying machines is that they don't make perfect copies, and the reason the copies aren't perfect is the presence of spontaneous emission, which muddies up the copies. This has been known for 30 years, mind you, and was first pointed out by the German-American physicist Leonard Mandel. Indeed, only perfect copying is disallowed. There is a whole literature on what is now known as "quantum cloning machines", and it is possible to calculate what the highest allowed fidelity of cloning is. When making two copies from one, the highest possible fidelity is $F$=5/6. That's an optimal 1->2 quantum cloner. And it turns out that in a particular limit (as I point out in this paper from 2006) black holes can actually achieve that limit!  I'll point out what kind of black holes are optimal cloners further below.
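To make the 5/6 number concrete, here is a numpy sketch of an optimal universal 1->2 cloner (my own illustration, using the well-known Bužek-Hillery construction, which has nothing black-hole specific about it; the isometry layout and variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

s23, s16 = np.sqrt(2/3), np.sqrt(1/6)

# Buzek-Hillery cloning isometry V: input qubit -> (clone1, clone2, ancilla).
# |0> -> sqrt(2/3)|00>|0> + sqrt(1/3)|psi+>|1>,   |psi+> = (|01>+|10>)/sqrt(2)
# |1> -> sqrt(2/3)|11>|1> + sqrt(1/3)|psi+>|0>
V = np.zeros((8, 2))
V[0b000, 0] = s23
V[0b011, 0] = s16
V[0b101, 0] = s16
V[0b111, 1] = s23
V[0b010, 1] = s16
V[0b100, 1] = s16

def clone_fidelities(psi):
    """Fidelity of each clone's reduced state with the input |psi>."""
    out = (V @ psi).reshape(2, 2, 2)              # indices: clone1, clone2, ancilla
    rho1 = np.einsum('ijk,ljk->il', out, out.conj())  # trace out clone2 + ancilla
    rho2 = np.einsum('jik,jlk->il', out, out.conj())  # trace out clone1 + ancilla
    return (psi.conj() @ rho1 @ psi).real, (psi.conj() @ rho2 @ psi).real

v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)
print(clone_fidelities(psi))  # both clones come out at 5/6, for any input state
```

Each clone's reduced state works out to $\frac{2}{3}|\psi\rangle\langle\psi| + \frac{1}{6}\mathbb{1}$, so the fidelity is $2/3+1/6 = 5/6$ regardless of the input: imperfect, universal, and exactly at the allowed optimum.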

All right, so now we have seen that black holes must stimulate the emission of particles in response to incoming radiation. Because Einstein said they must. The channel thus looks like this:


In addition to the absorbed/reflected radiation, there is spontaneous emission (in red), and there is stimulated emission (in blue). There is something interesting about the stimulated "clones". (I will refer to the quantum copies as clones even though they are not perfect clones, of course. How good they are is central to what follows). 

Note that the clone behind the horizon has a bar over it, which denotes "anti". Indeed, the stimulated stuff beyond the horizon consists of anti-particles, and they are referred to in the literature as anti-clones, because the relationship between $\rho_{\rm out}$ and $\bar \rho_{\rm out}$ is a quantum mechanical NOT operation. (Or, to be technically accurate, the best NOT you can do without breaking quantum mechanics.) That the stimulated stuff inside and outside the horizon must be particles and anti-particles is clear, because the process must conserve particle number. We should keep in mind that the Hawking radiation also conserves particle number. The total number of particles going in is $n$, which is also the total number of particles going out (adding up stuff inside and outside the horizon). I checked. 
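The anti-clone relationship can also be made concrete. Here is a short sketch of my own (using the known optimal universal-NOT map, not anything derived in the paper): the best approximation to $|\psi\rangle \to |\psi^\perp\rangle$ that quantum mechanics allows is the map $\rho \to (2\mathbb{1}-\rho)/3$, and it reaches the orthogonal state with fidelity 2/3.

```python
import numpy as np

rng = np.random.default_rng(2)

# Haar-random qubit and its orthogonal partner
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)
psi_perp = np.array([-psi[1].conj(), psi[0].conj()])  # <psi|psi_perp> = 0

rho = np.outer(psi, psi.conj())

# Optimal universal NOT: best allowed approximation to rho -> |psi_perp><psi_perp|
rho_not = (2 * np.eye(2) - rho) / 3

fid = (psi_perp.conj() @ rho_not @ psi_perp).real
print(fid)  # 2/3, for every input state
```

Since $\langle\psi^\perp|\rho|\psi^\perp\rangle = 0$, the fidelity is $(2-0)/3 = 2/3$ for any input, which is why the anti-clone behind the horizon is necessarily an imperfect "NOT" of what went in.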

Now that we know that there are a bunch of clones and anti-clones hanging around, how do we use them to transfer the quantum entanglement? Actually, we are not interested here in a particular protocol, we are much more interested in whether this can be done at all. If we would like to know whether a quantum state can be reconstructed (by Bob) perfectly, then we must calculate the quantum capacity of the channel. While how to do this (and whether this calculation can be done at all) is technical, one thing is not: If the quantum capacity is non-zero then, yes, Bob can reconstruct Alice's state perfectly (that is, he will be entangled with $R$ exactly like Alice was when he's done). If it is zero, then there is no way to do this, period.

In the paper I'm blogging about, Kamil and I did in fact calculate the capacity of the black hole channel, but only for two special cases: $\alpha=0$ (a perfectly reflecting black hole), and $\alpha=1$ (a black hole that reflects nothing). The reason we did not tackle the general case is that at the time of this writing, you can only calculate the quantum capacity of the black hole channel exactly for these two limiting cases. For a general $\alpha$, the best you can do is give a lower and an upper bound, and we have that calculation planned for the future. But the two limiting cases are actually quite interesting.

[Note: Kamil informed me that for channels that are sufficiently "depolarizing", the capacity can in fact be calculated, and then it is zero. I will comment on this below.]

First: $\alpha=0$. In that case the black hole isn't really a black hole at all, because it swallows nothing. Check the figure up there, and you'll see that in the absorption/reflection column, you have nothing in black behind the horizon. Everything will be in front. How much is reflected and how much is absorbed doesn't affect anything in the other two columns, though. So this black hole really looks more like a "white hole", which in itself is still a very interesting quantum object. Objects like that have been discussed in the literature (but it is generally believed that they cannot actually form from gravitational collapse). But this is immaterial for our purposes: we are just investigating the quantum capacity of such an object in some extreme cases. For the white hole, you now have two clones outside, and a single anti-clone inside (if you send in one particle). 


Technical comment for experts: 
A quick caveat: Even though I write that there are two clones and a single anti-clone after I send in one particle, this does not mean that this is the actual number of particles that I will measure if I stick out my particle detector, dangling out there off of the horizon. This number is the mean expected number of particles. Because of vacuum fluctuations, there is a non-zero probability of measuring a hundred million particles. Or any other number.  The quantum channel is really a superposition of infinitely many cloning machines, with the 1-> 2 cloner the most important. This fundamental and far-reaching result is due to Kamil. 

So what is the capacity of the channel? It's actually relatively easy to calculate because the channel is already well-known: it is the so-called Unruh channel that also appears in a quantum communication problem where the receiver is accelerated, constantly. The capacity looks like this:


Quantum capacity of the white hole channel as a function of z
In that figure, I show you the capacity as a function of $z=e^{-\omega/T}$, where $T$ is the temperature of the black hole and $\omega$ is the frequency (or energy) of the mode. For a very large black hole the temperature is very low and, as a consequence, the channel isn't very noisy at all (low $z$). The capacity therefore is nearly perfect (close to one qubit decoded for every qubit sent). When black holes evaporate, they become hotter, and the channel becomes noisier (higher $z$). For infinitely small black holes ($z=1$) the capacity finally vanishes. But so does our understanding of physics, of course, so this is no big deal. 

What this plot implies is that you can perfectly reconstruct the quantum state that Alice daringly sent into the white hole as long as the capacity $Q$ is larger than zero. (If the capacity is small, it would just take you longer to do the perfect reconstruction.) I want to make one thing clear here: the white hole is indeed an optimal cloning machine (the fidelity of 1->2 cloning is actually 5/6, for each of the two clones). But to recreate the quantum state perfectly, you have to do some more work, and that work requires both clones. After you have finished, though, the reconstructed state has fidelity $F=1$. 

"Big deal" you might say, "after all the white hole is a reflector!"

Actually, it is a somewhat big deal, because I can tell you that if it wasn't for that blue stimulated bit of radiation in that figure above, you couldn't do the reconstruction at all! 

"But hold on hold on", I hear someone mutter, from far away. "There is an anti-clone behind the horizon! What do you make of that? Can you, like, reconstruct another perfect copy behind the horizon? And then have TWO?"

So, now we come to the second result of the paper. You actually cannot. The capacity of the channel into the black hole (what is known as the complementary channel) is actually zero because (and this is technical speak) the channel into the black hole is entanglement breaking. You can't reconstruct perfectly from a single clone or anti-clone, it turns out. So, the no-cloning theorem is saved. 

Now let's come to the arguably more interesting bit: a perfectly absorbing black hole ($\alpha$=1). By inspecting the figure, you see that now I have a clone and an anti-clone behind the horizon, and a single clone outside (if I send in one particle). Nothing changes in the blue and red lines. But everything changes for the quantum channel. Now I can perfectly reconstruct the quantum state behind the horizon (as calculating the quantum capacity will reveal), but the capacity in front vanishes! Zero bits, nada, zilch. If $\alpha=1$, the channel from Alice to Bob is entanglement breaking.  


It is as if somebody had switched the two sides of the black hole! 
Inside becomes outside, and outside becomes inside!

Now let's calm down and ponder what this means. First: Bob is out of luck. Try as he might, he cannot have what Alice had: the same entanglement with $R$ that she enjoyed. Quantum entanglement is lost when the black hole is perfectly absorbing. We have to face this truth.  I'll try to convince you later that this isn't really terrible. In fact it is all for the good. But right now you may not feel so good about it.

But there is some really good news. To really appreciate this good news, I have to introduce you to a celebrated law of gravity: the equivalence principle. 

The principle, due to the fellow whose pic I have a little higher up in this post, is actually fairly profound. The general idea is that an observer should not be able to figure out whether she is, say, on Earth being glued to the surface by 1g, or whether she is really in a spaceship that accelerates at the rate of 1g (g being the constant of gravitational acceleration on Earth, you know: 9.81 m/sec$^2$). The equivalence principle has far reaching consequences. It also implies that an observer (called, say, Alice), who falls towards (and ultimately into) a black hole, should not be able to figure out when and where she passed the point of no return. 

The horizon, in other words, should not appear as a special place to Alice at all. But if something dramatic would happen to quantum states that cross this boundary, Alice would have a sure-fire way to notice this change: she could just keep the quantum state in a protected manner at her disposition, and constantly probe this state to find out if anything happened to it. That's actually possible using so-called "non-demolition" experiments. So, unless you feel like violating another one of Einstein's edicts (and, frankly, the odds are against you if you do), you better hope nothing happens to a quantum state that crosses from the outside to the inside of a black hole in the perfect absorption case ($\alpha=1$). 

Fortunately, we proved (result No. 3) that you can perfectly reconstruct the state behind the horizon when $\alpha=1$: that capacity is non-zero, and as a consequence the equivalence principle is upheld. 

This may not appear to you as much of a big deal when you read this, but many many researchers have been worried sick about this, that the dynamics they expect in black holes would spell curtains for the equivalence principle. I'll get back to this point, I promise. But before I do so, I should address a more pressing question.


"If Alice's quantum information can be perfectly reconstructed behind the horizon, 
what happens to it in the long run?"

This is a very serious question. Surely we would like Bob to be able to "read" Alice's quantum message (meaning he yearns to be entangled just like she was). But this message is now hidden behind the black hole event horizon. Bob is a patient man, but he'd like to know: "Will I ever receive this quantum info?"

The truth is, today we don't know how to answer this question. We understand that Alice's quantum state is safe and sound behind the horizon--for now. There is also no reason to think that the ongoing process of Hawking radiation (that leads to the evaporation of the black hole) should affect the absorbed quantum state. But at some point or other, the quantum black hole will become microscopic, so that our cherished laws of physics may lose their validity. At that point, all bets are off. We simply do not understand today what happens to quantum information hidden behind the horizon of a black hole, because we do not know how to calculate all the way to very small black holes. 

Having said this, it is not inconceivable that at the end of a black hole's long long life, the only thing that happens is the disappearance of the horizon. If this happens, two clones are immediately available to an observer (the one that used to be on the outside, and the one that used to be inside), and Alice's quantum state could finally be resurrected by Bob, a person that no doubt would merit to be called the most patient quantum physicist in the history of all time. 


Now what does this all mean for black hole physics? I have previously shown that classical information is just fine, and that the universe remains predictable for all times. This is because to reconstruct classical information, a single stimulated clone is enough; it does not matter what $\alpha$ is, it could even be one. Quantum information can be conveyed accurately if the black hole is actually a white hole, but if it is utterly black, then quantum information is stuck behind the horizon, even though we have a piece of it (a single clone) outside of the horizon. But that's not enough, and that's a good thing too, because we need the quantum state to be fully reconstructable inside of the black hole, otherwise the equivalence principle is hosed. And if it is reconstructable inside, then you better hope it is not reconstructable outside, because otherwise the no-cloning theorem would be toast. 

So everything turns out to be peachy, as long as nothing drastic happens to the quantum state inside the black hole. We have no evidence of something so drastic, but at this point we simply do not know. 

Now what are the implications for black hole complementarity? The black hole complementarity principle was created from the notion (perhaps a little bit vague) that, somehow, quantum information is both reflected and absorbed by the black hole channel at the same time. Now, given that you have read this far in this seemingly interminable post, you know that this is not allowed. It really isn't. What Susskind, Thorlacius, and 't Hooft argued for, however, is that it is OK as long as you won't be caught. Because, they argued, nobody will be able to measure the quantum state on both sides of the horizon at the same time anyway!

Now I don't know about you, but I was raised believing that just because you can't be caught it doesn't make it alright to break the rules.  And what our more careful analysis of quantum information interacting with a black hole has shown, is that you do not break the quantum cloning laws at all. Both the equivalence principle and the no-cloning theorem are perfectly fine. Nature just likes these laws, and black holes are no outlaws.

Adventurous Alice encounters a firewall? Credit: Nature.com
What about firewalls then? Quantum firewalls were proposed to address a perceived inconsistency in the black hole complementarity picture. But you now already know that that picture was inconsistent to begin with. Violating no-cloning laws brings with it all kinds of paradoxes. Unfortunately, the firewall hypothesis just heaped paradoxes upon paradoxes, because it proposed that you have to violate the equivalence principle as well. This is because that hypothesis assumes that all the information was really stored in the Hawking radiation (the red stuff in the figures above). But there is really nothing in there, so that the entire question of whether transmitting quantum information from Alice to Bob violates the concept of "monogamy of entanglement" is moot. The Hawking radiation can be entangled with the black hole, but it is no skin off of Alice or Bob, that entanglement is totally separate. 

So, all is well, it seems, with black holes, information, and the universe. We don't need firewalls, and we do not have to invoke a "complementarity principle". Black hole complementarity is automatic, because even though you do not have transmission when you have perfect reflection, a stimulated clone does make it past the horizon. And when you have perfect transmission ($\alpha$=1) a stimulated clone still comes back at you. So it is stimulated emission that makes black hole complementarity possible, without breaking any rules. 

Of course we would like to know the quantum capacity for an arbitrary $\alpha$, which we are working on. One result is already clear: if the transmission coefficient $\alpha$ is high enough that not enough of the second clone is left outside of the horizon, then the capacity abruptly vanishes. Because the black hole channel is a particular case of a "quantum depolarizing channel", discovering what this critical $\alpha$ is only requires mapping the channel's error rate $p$ to $\alpha$. 
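To give a flavor of what such a calculation looks like (a sketch under my own parametrization, not the $\alpha$-mapping from our paper): for a qubit depolarizing channel with error probability $p$, the single-letter coherent information with a maximally entangled input is the hashing bound $1 - H_2(q) - q\log_2 3$ with $q = 3p/4$, and this lower bound on the capacity crosses zero at a finite error rate.

```python
import numpy as np

def h2(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def coherent_info(p):
    """Single-letter coherent information (hashing bound) of the qubit
    depolarizing channel rho -> (1-p) rho + p I/2, maximally entangled input."""
    q = 3 * p / 4          # total probability of a Pauli error X, Y or Z
    return 1 - h2(q) - q * np.log2(3)

print(coherent_info(0.0))   # 1.0: a noiseless channel sends one qubit per use
print(coherent_info(0.5))   # negative: the hashing bound has hit zero

# Locate the error rate where the bound crosses zero, by bisection
lo, hi = 0.1, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if coherent_info(mid) > 0 else (lo, mid)
print(lo)  # roughly p = 0.25
```

A caveat: this single-letter quantity is only a lower bound on the true quantum capacity in general, so locating the exact critical $\alpha$ for the black hole channel requires the mapping described above, not just this sketch.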

I leave you with an interesting observation. Imagine a black hole channel with perfect absorption, and put yourself into the black hole. Then, call yourself "Complementary Alice", and try to send a quantum state across the horizon. You immediately realize that you can't: the quantum state will be reflected. The capacity to transmit quantum information out of the black hole vanishes, while you can perfectly communicate quantum entanglement with "Complementary Bob". Thus, from the inside of the perfectly absorbing black hole it looks just like the white hole channel (and of course the reverse is true for the perfectly reflecting case). Thus, the two channels are really the same, just viewed from different sides! 

This becomes even more amusing if you keep in mind that (eternal) black holes have white holes in their past, and white holes have black holes in their future.