Eqs

Monday, May 27, 2013

What is Information? (Part 2: The Things We Know)

In the first part of this post I talked to you mostly about entropy: how the entropy of a physical system (such as a die, a coin, or a book) depends on the measurement device that you will use for querying that system, and how, come to think of it, the uncertainty (or entropy) of any physical object really is infinite, made finite only by the finiteness of our measurement devices. If you start to think about it, the things you could possibly know about any physical object are infinite! Think about it! Look at any object near you. OK, the screen in front of you. Just imagine a microscope zooming in on the area framing the screen, revealing the intricate details of the material: the variations that the manufacturing process left behind, making each and every computer screen (or iPad or iPhone) essentially unique.

If this was another blog, I would now launch into a discussion of how there is a precise parallel (really!) to renormalization theory in quantum field theory... but it isn't. So, let's instead delve head first into the matter, and finally discuss the concept of information.

What does it even mean to have information? Yes, of course, it means that you know something. About something. Let's make this more precise. I'll conjure up the old "urn". The urn has things in it. You have to tell me what they are.

Credit: www.dystopiafunction.com


So, now imagine that.....

"Hold on, hold on. Who told you that the urn has things in it? Isn't that information already? Who told you that?"

OK, fine, good point. But you know, the urn is really just a stand-in for what we call "random variables" in probability theory. A random variable is a "thing" that can take on different states. Kind of like the urn, that you draw something from? When I draw a blue ball, say, then the "state of the urn" is blue. If I draw a red ball, then the "state of the urn" is red. So, "urn=random variable". OK?

"OK, fine, but you haven't answered my question. Who told you that there are blue and red balls in it? Who?"

You really are interrupting my explanations here. Who are you anyway? Never mind. Let me think about this. Here's the thing. When a mathematician defines a random variable, they tell you which states it can take on, and with what probability. Like: "A fair coin is a random variable with two states. Each state can be taken on with equal probability one-half." When they give you an urn, they also tell you how likely it is to get a blue or a red ball from it. They just don't tell you what you will actually get when you pull one out.

"But is this how real systems are? That you know the alternatives before asking questions?"

All right, all right. I'm trying to teach you information theory, the way it is taught in any school you would set your foot in. I concede, when I define a random variable, then I tell you how many states it can take on, and what the probability is that you will see each of these states, when you "reach into the random variable". Let's say that this info is magically conferred upon you. Happy now?

"Not really."

OK, let's just imagine that you spend a long time with this urn, and after a while of messing with it, you do realize that:

A) This urn has balls in it.
B) From what you can tell, they are blue and red.
C) Reds occur more frequently than blues, but you're still working on what the ratio is.

Is this enough?

"At least now we're talking. Do you know that you assume a lot when you say 'random variable'?"

I wanted to tell you about information, and we got bogged down in this discussion about random variables instead. Really, you're getting in the way of some valuable instruction here. Could you just go away?

"You want to tell me what it means to 'know something', and you use urns, which you say are just code for random variables, and I find out that there is all this hidden information in there! Who is getting in the way of instruction here??? Just sayin'!"

....

OK.

....

All right, you're making this more difficult than I intended it to be. According to standard lore, it appears that you're allowed to assume that you know something about the things you know nothing about. Let's just call these things "common sense". And the things you don't know about the random variable are the things that go beyond common sense: the things you couldn't know unless you performed dedicated experiments to ascertain the state of the variable. That a coin has two sides, though? That's common knowledge, right?

"And urns have red and blue balls in them? What about red and green?"

You're kinda pushing it now. Shut up.

Soooo. Here we are. Excuse this outburst.  Moving on.

We have this urn. It's got red and blue balls in it. (This is common knowledge.) They could be any pair of colors, you do realize. How much don't you know about it?

Easily answered using our good buddy Shannon's insight. How much you don't know is quantified by the "entropy" of the urn. That's calculated from the fraction of blue balls known to be in the urn, and the fraction of red balls in the urn. You know, those fractions that are common knowledge. So, let's say that the fraction of blue is p. The fraction of red then is of course (you do the math) 1-p. And the entropy of the urn is

                                $H(X)=-p\log p-(1-p)\log(1-p)$          (1)

Now you're gonna ask me about the logarithm, aren't you? Like, what base are you using?

You should. The mathematical logarithm function needs a base; without one, its value is undefined. And given the base, the entropy function defined above gets more than just a value: it gets units. So, for example, if the base is 2, then the units are "bits". If the base is e, then the units are "nats". We are mostly going to be using bits, so base 2 it is.
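If you like seeing formulas as code, here is a minimal sketch of formula (1) in Python (the function name is mine, nothing standard), with the base setting the units just as described:

```python
import math

def binary_entropy(p, base=2):
    """Entropy of a two-state random variable, formula (1):
    bits for base 2, nats for base e."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -p * math.log(p, base) - (1 - p) * math.log(1 - p, base)

print(binary_entropy(0.5))          # fair urn: 1 bit
print(binary_entropy(0.9))          # biased urn: about 0.47 bits
print(binary_entropy(0.5, math.e))  # same fair urn in nats: about 0.69
```

Note how the fair urn gives exactly 1 bit, and any bias pushes the entropy below 1.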

"In part 1 you wrote that the entropy is $\log N$, where $N$ is the number of states of the system. Are you changing definitions on me?"

I'm not, actually. I just used a special case of the entropy to get across the point that the uncertainty/entropy is additive: the special case where each possible state occurs with equal probability. For our two-state urn that means $p=1/2$, and formula (1) turns into $H(X)=\log 2$, which is just $\log N$ for $N=2$. The same happens for any number of equally likely states.
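For the record, the general case works out in one line: with $N$ equally likely states, each occurring with probability $1/N$, the entropy is

                                $H(X)=-\sum_{i=1}^{N}\frac{1}{N}\log\frac{1}{N}=\log N$

which for $N=2$ (and base 2) is exactly the 1 bit that formula (1) gives at $p=1/2$.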

But let's get back to our urn. I mean random variable. And let's try to answer the question: 

"How much is there to know (about it)? "

Assuming that we know the common-knowledge stuff, that the urn only has red and blue balls in it, then what we don't know is the identity of the next ball that we will draw. This drawing of balls is our experiment. We would love to be able to predict the outcome of this experiment exactly, but in order to pull off this feat, we would have to have some information about the urn. I mean, the contents of the urn.

If we know nothing else about this urn, then the uncertainty is equal to the log of the number of possible states, as I wrote before. Because there are only red and blue balls, that would be log 2. And if the base of the log is two, then the result is $\log_2 2=1$ bit.  So, if there are red and blue balls only in an urn, then I can predict the outcome of an experiment (pulling a ball from the urn) just as well as I can predict whether a fair coin lands on heads or tails. If I correctly predict the outcome (I will be able to do this about half the time, on average) I am correct purely by chance. Information is that which allows you to make a correct prediction with accuracy better than chance, which in this case means, more than half of the time. 

"How can you do this, for the case of the fair coin, or the urn with equal numbers of red and blue balls?"

Well, you can't unless you cheat. I should say, the case of the urn and of the fair coin are somewhat different. For the fair coin, I could use the knowledge of the state of the coin before flipping, and the forces acting on it during the flip, to calculate how it is going to land, at least approximately. This is a sophisticated way to use extra information to make predictions (the information here is the initial condition of the coin) but something akin to that has been used by a bunch of physics grad students to predict the outcome of casino roulette in the late 70s. (And incidentally I know a bunch of them!)

The coin is different from the urn because for the urn, you won't be able to get any "extraneous" information. But suppose the urn has blue and red balls in unequal proportions. If you knew what these proportions were [the $p$ and $1-p$ in Eq. (1) above] then you could reduce the uncertainty of 1 bit to $H(X)$. A priori (that is, before performing any measurements on the probability distribution of blue and red balls), the distribution is of course given by $p=1/2$, which is what you have to assume in the absence of information. That means your uncertainty is 1 bit. But keep in mind (from part 1: The Eye of the Beholder) that it is only one bit because you have decided that the color of the ball (blue or red) is what you are interested in predicting.

If you start drawing balls from the urn (and then replacing them, and noting down the result, of course) you would be able to estimate $p$ from the frequencies of blue and red balls. So, for example, if you end up seeing 9 times as many red balls as blue balls, you should adjust your prediction strategy to "The next one will be red". And you would likely be right about 90% of the time, quite a bit better than the 50/50 prior.
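A quick simulation makes the point. This is just a sketch: the 90/10 urn, the sample size, and every name in it are made up for illustration.

```python
import random

random.seed(1)
true_p_red = 0.9  # hypothetical urn: 90% red, unknown to the observer

# Draw 1000 balls with replacement and note the colors.
draws = ["red" if random.random() < true_p_red else "blue"
         for _ in range(1000)]

# Estimate p from the observed frequencies.
estimated_p = draws.count("red") / len(draws)

# Strategy: always predict the more frequent color.
prediction = "red" if estimated_p > 0.5 else "blue"
accuracy = sum(d == prediction for d in draws) / len(draws)
print(estimated_p, accuracy)  # both near 0.9, well above the 50/50 prior
```

(Strictly speaking you should test the strategy on fresh draws, not the ones you estimated from, but for this point it makes no practical difference.)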

"So what you are telling me, is that the entropy formula (1) assumes a whole lot of things, such as that you already know to expect a bunch of things, namely what the possible alternatives of the measurement are, and even what the frequency distribution is, which you can really only know if you have divine inspiration, or else made a ton of measurements!"

Yes, dear reader, that's what I'm telling you. You already come equipped with some information (your common sense) and if you can predict with accuracy better than chance (because somebody told you the $p$ and it is not one half), then you have some more info. And yes, most people won't tell you that. But if you want to know about information, you first need to know.... what it is that you already know.

Part 3: Everything is Conditional

Sunday, May 19, 2013

Where do thinking machines come from?

We've been waiting for these thinking machines for a long time now. We've read about them, and seen them in countless movies. They are just technology, right? And we've gotten really good at this technology thing! But where are the machines?

In a previous post I've hinted at the big problem in serious Artificial Intelligence (AI) research: if the theory of consciousness based on the concept of integrated information is right, then thinking machines are essentially undesignable. 

Mind you, we do have smart machines. We have machines that outperform humans at playing chess, we have self-driving cars that process close to 1 Gbit per second of data, and we have machines that can beat pretty much anybody at Jeopardy! But neither you nor I would call these smart machines intelligent. We do not take that word lightly: if you're just good at doing one particular job, then you're smart at that, but you are not intelligent. Google's car cannot play chess (nor can Watson), and neither Deep Blue nor Watson should be allowed behind the wheel of a car.

What's going on here? 

Here's the most important thing you need to know about what it takes to be intelligent. You have to be able to create worlds inside your brain. Literally. You have to be able to imagine worlds, and you have to be able to examine these worlds. Walk around in them, linger. 

This is important because you live in this world, the one you are also imagining. This world is complex, it is dangerous, and it is often unpredictable. It is precisely this unpredictability that is dangerous: you can be lunch if you don't understand the tell-tale signs of the lurking tiger. 

Yes I know, your chances of being eaten by a tiger are fairly low, but I'm not talking about today: I'm talking about the time when we (as a species) "grew up", that is, when we came down from the trees and ventured into the open fields of the savannah. To survive in this world, we have to make accurate predictions about the future state of the world. (Not just in the next five minutes, but also on the scale of months, seasons, years.)

How do we make these predictions? Why, we imagine the world, and in our minds imagine what happens. These imaginings, juxtaposed with the things that really do happen, allow us to hone a very important skill: we can represent an abstract version of the world in our heads, and use it to understand it. Understanding means removing surprises, the things that usually kill you.

Thinking about an object thus means creating an abstract representation of this object in your head, and playing around with it. If you can't do that, then you cannot think. You cannot be intelligent.

Are workers in the field of Artificial Intelligence oblivious about this absolutely crucial, essential aspect of intelligence?

Absolutely not. They are perfectly aware of it. In the heyday of AI research, that's pretty much all people did: they tried to cram as many facts about the real world into a computer's memory as they could. This, by the way, is still pretty much the way Watson is programmed, but he has a smarter retrieval system than what was possible in those days, based on Bayesian inference.

But in the end, the programmers had to give up. No matter how much information they crammed into these brains, this information was not integrated: it did not produce an impression of the object that allowed the machine to make new inferences about the object that were not already programmed in. But that is precisely what is needed: your model of the world has to be good enough so that (when thinking about it)  you can make predictions about things you didn't already know.

So what did AI researchers do? Some gave up, and left the field. Others decided that they could do without these pesky imagined worlds: that you could create intelligence without representation. (The linked article is available beyond the paywall all over the internet, for example here. Tells you something about paywalls.) NOTE: This was available until recently! That also tells you something about paywalls.

Given all that I just told you, you ought to at least be baffled. It all seemed so convincing! You can do without internal models? How?

The idea that you could do away with representations for the purpose of Artificial Intelligence is due to Rodney Brooks, then Professor of Robotics at MIT. Brooks is no slouch, mind you. His work has influenced a generation of roboticists. But he decided that robots should not make plans, because, well, the best laid plans, you know....

Rather Brooks argued that robots should react to the world. The world already contains all the complexity! Why not use that? Why program something that you have direct access to?

Why indeed? Brooks was quite successful with this approach, creating reactive robots with a subsumption architecture. Reactive robots are indeed robust: they can act appropriately given the current state of the world, because they take the world seriously: the world is all they have. 

But I think we can all agree that these robots, agile as they are, won't ever be intelligent. They won't be able to make plans. Because plans require good internal models, which we don't know how to program.  

So where will our intelligent machines come from? 

The avid reader of Spherical Harmonics (should such a person actually exist), already knows the answer to this question. Evolution is the tool to create the undesignable! If you can't program it, evolve it! After all, that's where we came from.

Now, I've hinted at this before: evolve it! Can you actually evolve representations? 

Yeah, we can, and we've shown it. And there is a paper that just came out in the journal Neural Computation that documents it. That's right, you've been reading a blog post that is an advertisement for a journal article that is behind a paywall!

Relax, there is a version of the article on the AdamiLab web site. Or go get it from arxiv.org here.

Now back to the specifics: "You've evolved representations, you say? Prove it!"

Ah! Now, a can of worms opens.  How can you show that any evolved anything actually represents the world inside its.... bits? What are representations anyway? Can you measure them?

Now here's a good question. It's the question the empiricist asks, when he is entangled in a philosophical discussion. And lo and behold, the concept of representation is a big one in the field of philosophy. Countless articles have been written about it. I'm not going to review them here. I have this sneaking suspicion that I am, again, engaged in writing an overly long blog post. If you're into this sort of thing (reading about philosophy, as opposed to writing overly long blog posts), you can read about philosophers talking about representation here, for example. Or here. I could go on. 

Philosophers have defined "representation" as something that "stands in" for the real thing. Something we would call a model. So we're all on the same wavelength here. But can you measure it? What we have done in the article I'm blogging about, is to propose an information-theoretic measure for how much a brain represents. And then we evolve a brain that must represent to win, and measure that thing we call representation. But then we go one better: we also measure what it is that these brains represent.

We literally measure what these brains are thinking about when they make their predictions. 

How do we do that? So, first of all, we understand that when you represent something, then this something must be information. Your model of the world is a compressed representation of the world, compressed into the essential bits only. But importantly, you're not allowed to get those bits from looking at the world. Staring at it, if you will. If you have a model of the world, you can have that model with your eyes closed. And ears. All sensors. Because if you could not, you would just be a reactive machine. So, a representation is those bits of the world that you can't see in your sensors. Can you measure that?

Hell yes! Claude Shannon, that genius of geniuses, taught us how! Here is the informational Venn diagram between the world (W), the sensors (S) that see the world (they represent it, albeit in a trivial manner),  and the Brain (B):



What we call "representation" (R) is the information that the brain knows about the world (information shared between W and B) given the sensor states (S). "Given", in the language of information theory, means that these states (the sensor states) do not contribute to your uncertainty. It also means that the "given" states do not contribute to the information (shared entropy) between W and B. That's why the "intersection triangle" between W, B, and S does not contribute to R: we have to subtract it because it also belongs to S. (I will talk about these concepts in more detail in part 2 of my "What is Information?" series.) So, R is what the brain knows about the world without sneaking a peek at what the world currently looks like in the sensors. It is what you truly know.
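In formulas, this R is the conditional mutual information $I(W;B|S)$. Here is a sketch of how you might compute it from a joint probability table (Python; the toy distribution and every name here are mine, not the paper's code):

```python
import itertools
import math
from collections import defaultdict

def cond_mutual_info(joint):
    """I(W;B|S) in bits from a dict {(w, s, b): probability}, using
    I(W;B|S) = sum p(w,s,b) * log2[ p(s) p(w,s,b) / (p(w,s) p(s,b)) ]."""
    p_s, p_ws, p_sb = defaultdict(float), defaultdict(float), defaultdict(float)
    for (w, s, b), p in joint.items():
        p_s[s] += p
        p_ws[(w, s)] += p
        p_sb[(s, b)] += p
    total = 0.0
    for (w, s, b), p in joint.items():
        if p > 0:
            total += p * math.log2(p_s[s] * p / (p_ws[(w, s)] * p_sb[(s, b)]))
    return total

# Toy world: one hidden world bit W, the brain B tracks it perfectly,
# while the sensor S is an independent coin (pure noise).
joint = {}
for w, s in itertools.product([0, 1], repeat=2):
    joint[(w, s, w)] = 0.25  # b = w with probability 1
print(cond_mutual_info(joint))  # 1.0 bit: B knows W even with S "given"
```

If instead the brain merely copied the sensor, and the sensor copied the world, this measure would be zero: everything the brain "knows" would already be visible in the sensors.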

Now that we have defined representation quantitatively (so that we can measure it), how does it evolve?

Splendidly, as you may have surmised. To test this, we designed a task (that a simulated agent must solve) that requires building a model of the task, in your brain. This task is relatively simple: you are a machine that catches blocks. Blocks rain down from the sky (falling diagonally) but there are two kinds of blocks in the world. Small ones (that you should definitely catch) and large ones (that you should definitely avoid). To make things interesting, your vision is kind of shoddy. You have a blind spot in the middle of your retina, so that a big block may look like a small block (and vice versa), for a while.




In this image, a large block is falling diagonally to the left. This is a tough nut to crack for our agent, because he hasn't even seen it yet. He is moving in the right direction (perhaps by chance), but once the block appears in the agent's sensors, he has to make a decision quickly. You have to determine size, direction of motion, and relative location (is the block to my left? right above me? to my right?) all at once. You have to integrate several informational streams in order to "recognize" what you are dealing with. And the agent's actions will tell us whether he has "understood" what it is he is dealing with. That's what makes this task cool.

We can in fact evolve agents that solve this task perfectly, that is, they determine the right move for each of the 80 possible scenarios. Why 80? Well, the falling block can be in 20 different positions in the top row. It can be small or large. It can fall to the left or to the right: 20 x 2 x 2 = 80. You say that I'm neglecting the 20 possible positions of the catcher? No I'm not, because the game "wraps" in the horizontal direction: if the block falls off the screen on the left, it reappears, as if by magic, on the right. The agent likewise reappears on the left/right if he disappears on the right/left. As a consequence, we only have to count the 20 relative positions between falling block and catching agent.
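The counting argument is easy to check by brute force (a throwaway sketch; the names are mine): enumerate every absolute block and agent position, and reduce each pair to the relative offset modulo the wrap.

```python
WIDTH = 20  # horizontal positions; the world wraps around

# Enumerate every (block position, agent position, size, direction) and
# keep only what matters: the relative offset modulo WIDTH.
scenarios = {((block - agent) % WIDTH, size, direction)
             for block in range(WIDTH)
             for agent in range(WIDTH)
             for size in ("small", "large")
             for direction in ("left", "right")}
print(len(scenarios))  # 20 x 2 x 2 = 80 distinct cases
```

The 400 absolute position pairs collapse to 20 relative offsets, leaving 80 scenarios in total.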

As the agents become more proficient at catching (and avoiding) blocks, our measure R increases steadily. But not only can we measure how much of this world is represented in the agent's brain, we can literally figure out what they are thinking about!

Is this magic?

Not at all, it is information theory. The way we do this, is by defining a few (binary) concepts that we think may be important for the agent, such as:

Is the block to my left or to my right?
Is the block moving left or right?
Is the block currently triggering one of my sensors?
Is the block large or small?

Granted, the world itself can be in 1,600 different possible states. (Yes, we counted). These 4 concepts only cover two to the power of 4, or 16 possible states. But we believe that the agent may want to think about these four concepts in order to come to a decision; that these are essential concepts in this task. 

Of course, we may be wrong.

But we can measure which of the twelve neurons encode each of the four concepts, and we can even determine the time when they have become adapted to this feature. So, do the agents pay attention to these four concepts as they learn how to make a living in this world?

Not exactly, actually. That would be too simple. These concepts are important to a bunch of neurons, to be sure. But it is not like a single neuron evolves to pay attention to "big or small" while another tells the agent whether the block is moving left or right. Rather, these concepts are "smeared" across a bunch of neurons, and there is synergy between concepts. Synergy means that if two (or more) neurons encode a concept together synergistically, then together they have more information about it than the sum of the information that each one has by itself.
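Synergy is easiest to see in a toy case that I'm adding here (not the paper's analysis): let a binary concept be the XOR of two neurons. Each neuron alone then carries zero information about the concept, while the pair together carries a full bit, which is more than the sum (zero) of the parts.

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits from a list of equally likely (x, y) outcomes."""
    n = len(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    pxy = Counter(pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy concept: c = n1 XOR n2, all four neuron states equally likely.
states = [(n1, n2, n1 ^ n2) for n1 in (0, 1) for n2 in (0, 1)]
alone1 = mutual_info([(n1, c) for n1, n2, c in states])         # 0 bits
alone2 = mutual_info([(n2, c) for n1, n2, c in states])         # 0 bits
together = mutual_info([((n1, n2), c) for n1, n2, c in states]) # 1 bit
print(alone1, alone2, together)
```

Reading either neuron alone tells you nothing about the concept; only the joint state does.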

So what does all of this teach us?

It means (and of course I'm biased here) that we have learned a great deal about representation. We can measure how much a brain represents about its world within its states, information-theoretically, and we can (with some astute guessing) even spy on what concepts the brain uses to make decisions. We can even see these concepts form as the brain is processing the information. At the first time step, the brain is pretty much clueless: what it sees could lead to anything. After the second time step, it can rule out a bunch of different scenarios, and as time goes by, the idea of what the agent is looking at forms. It is a hazy picture at first, for sure. But as more and more information is integrated, the point in time arrives where the agent's mental image is crystal clear: THIS is what I'm dealing with, and this is why I move THAT way.

It is but a small step, for sure. Do brains really work like this? Can we measure representation in real biological brains? Figure out what an organism thinks about, and how decisions are made? 

If any of our information theory is correct, it is just a matter of technology to get the kind of data that will provide answers to these questions. That technology is far from trivial. In order to determine what we know about the brains that we evolve, we have to have the time series of neuronal firing (000010100010 etc.) for all neurons, for a considerable amount of time (such as the entire history of experiencing all 80 experimental conditions). That's fine for our simple little world, but it is not at all OK for any realistic system. Obtaining this type of resolution for animals is almost completely unheard of. Daniel Wagenaar (formerly at Caltech and now at the University of Cincinnati) can do this for 400 neurons in the ganglion of the medicinal leech. Yes, the thing seen on the left. Don't judge, it has very big neurons!

And, we are hoping to use Daniel's data to peer into the leech's brain, see what it is thinking about. We expect that food and mating are the variables we find. Not very original, I know. But wouldn't that be a new world? Not only can we measure how much a brain represents, we can also see what it is representing! As long as we have any idea about what the concepts could be that the animals are thinking about, that is. 

I do understand, from watching current politics, that this may be impossible for humans. But yet, we are undeterred! 

Article reference: L. Marstaller, A. Hintze, and C. Adami. (2013). The evolution of representation in simple cognitive networks. Neural Computation 25:2079-2107.