
Tuesday, December 10, 2024

Can an AI agent make you play nice?

Internet veterans are aware of Betteridge's Law of Headlines, according to which the question in the title of the post should be answered with a resounding "No". Well, I'm here to be the rebel! I'll try to convince you that the answer is in fact Yes!

What does "playing nice" even mean? Well, the word "play" implies games, so we are talking about game theory here, and "playing nice" therefore means "to cooperate". I've been writing about how to understand the ubiquity of cooperation in the biosphere several times in this blog (see here and here) because people often (mistakingly) argue that cooperative behavior cannot evolve (meaning, evolve via Darwinian mechanisms) because evolution is selfish and cooperation is not.  Well, that's just not true. Cooperation without communication cannot evolve, but when communication is possible, then it's a no-brainer. I will write about this in more detail when the relevant article on that topic comes out (in the meantime, you can read this letter that foreshadows that theory). Needless to say, we do not know of any cooperation in the biosphere that does not involve a form of communication. 

But "playing nice" also means something specific within society, in particular when it comes to maintaining a public good, or a resource, that is finite. Everyone knows that maintaining such public goods (such as water reserves, forests, Chilean seabass, or, you know, a livable climate) requires a certain amount of discipline by its users: it is possible to abuse the resource by overuse, overfishing, deforestation, etc. to make a short term profit, but forgoing the use of the resource for future generations. Commonly, this dilemma—the tension between short term profits and long-term deficits—is referred to as the "tragedy of the commons", a term popularized by the ecologist Garrett Hardin. While Hardin popularized the concept, I should not be remiss to note that he was also completely wrong about some aspects of his analysis, as pointed out, for example, by Elinor Ostrom who coined the concept of "public land". For example, Hardin often warned of overpopulation from a frankly xenophobic point of view, even as we now know that Earth is never going to face an overpopulation problem (but rather the opposite).

In evolutionary game theory (EGT) we can study the tragedy of the commons theoretically and in simulations. Specifically, we can construct a game with \(k+1\) players (the "+1" is the "focal player" whose decision to cooperate or defect we would like to influence, and the \(k\) others are "peripheral" players that are in a group with the focal player). All \(k+1\) players can voluntarily "invest" into a common good, and this good is then amplified by a factor \(r\) (think of the growth of a forest, or fishery, when it is appropriately maintained using the investment). The resulting "capital" is then distributed equally to all members of the group, including those that did not invest into the public good. Because of this, the rational strategy in this game is not to cooperate, but rather to profit from the public good without having paid a cost. I'm sure you've seen such behavior, haven't you?
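To make the payoff structure concrete, here is a minimal sketch in Python (my own illustration, not the code used in our simulations) of the linear public goods game just described:

```python
def pgg_payoffs(contributions, r):
    """Payoffs for one group in the linear public goods game.
    contributions: list of 0/1 tokens from the k+1 group members,
    r: synergy factor that amplifies the common pot."""
    pot = r * sum(contributions)               # the amplified common good
    share = pot / len(contributions)           # split equally among everyone
    return [share - c for c in contributions]  # contributors also pay their token

# The situation in the illustration below: 5 players (k = 4), 3 contribute, r = 5.
print(pgg_payoffs([1, 1, 1, 0, 0], r=5))   # contributors end up with 2, defectors with 3
```

Note that the two defectors walk away with more than the three contributors, which is the whole dilemma in one line of output.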

Illustration of the Public Goods Game with k=4 (5 players). In this illustration, three of the players contribute one token, and the total contributed is amplified to 15 because the synergy factor here is 5. 

Mathematically, you can show that if the synergy factor \(r\) is high enough (namely larger than the group size), then it becomes advantageous to invest even if no one else does. But you can quickly convince yourself that such synergy factors are illusory even in small groups of, say 5. Never mind millions. The question then is: what can we do to make cooperation profitable even when synergies are below the critical value, that is \(r<k+1\)? 
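To see where that critical value comes from (this is just the standard argument, restated): if the focal player contributes one token, that token is multiplied by \(r\) and split among all \(k+1\) group members, so the contribution changes the focal player's own payoff by

\(\Delta\pi=\frac{r}{k+1}-1,\)

which is positive only if \(r>k+1\), independently of what the other \(k\) players do. That is the barrier \(r_c=k+1\) we would like to lower.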

You might think that punishing defectors would be a strategy to entice players to cooperate, but as we have previously shown, that doesn't really work. (I really should have done a blog post about this work, which has some fascinating analogies to critical phenomena in condensed matter systems). What happens when you introduce punishment is that you create a system in which transitions (from cooperation to defection and vice versa) become metastable. But it does not move the barrier to cooperation (reduce the value of the critical \(r\)).

So what can be done? In a recent preprint, my colleague (and steadfast collaborator in all things game theory) Arend Hintze at Falun University in Sweden and I have looked at what would happen if some of the players in a group are not human at all, but rather AI-controlled agents. When we use that term, you should not think of some sort of Skynet-controlled robot; instead, imagine an automaton that makes decisions based on an algorithm. This is not altogether far-fetched: self-driving cars, for example, are either here already (see Waymo) or eternally being promised to be around the corner (by some other company). It is possible to envision dilemmas that occur when human drivers interact with such agents. For example, there are common situations at intersections where courtesy can lead to smoothly flowing traffic, while selfish behavior can create traffic bottlenecks. Being courteous could cost the courteous driver or AI some time, but will profit everyone in the long run. Can the presence of AI agents influence how a human driver might behave? Can they lower the barrier to cooperation?

You are probably thinking "That must depend on the AI's programming", and you would be right. We looked at three scenarios for how (and by whom) the AI agents' behavior is controlled (a minimal sketch of all three policies follows the list):

1. Institutionally prescribed cooperation: All AI agents are programmed to always cooperate, no matter what the human players do. Because companies might not want to take that "hit", this would have to be enforced by regulations.

2. Player-controlled probabilistic cooperation: The probability that the AI agent cooperates is controlled by the player themselves. For example, a player might force all the AI agents in their neighborhood to cooperate, as this would allow them to "rip them off" (the typical temptation to defect). 

3. AI agent mimics the player: In this scenario, the AI agent observes the human player and "copies" that behavior. Thus, if the player is being mean, the AI agent will be mean, whereas if the player cooperates, the agent will do so as well. You can see this strategy as a form of "Tit-for-Tat", if you will.
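To make the three scenarios concrete, here is a hedged sketch of the three decision rules (my paraphrase in Python, not the implementation used in the paper; `player_setting` and `player_last_move` are hypothetical stand-ins for what the agent is given):

```python
import random

COOPERATE, DEFECT = 1, 0

def ai_institutional():
    """Scenario 1: regulation forces unconditional cooperation."""
    return COOPERATE

def ai_player_controlled(player_setting):
    """Scenario 2: the human sets the probability that 'their' AI agents cooperate."""
    return COOPERATE if random.random() < player_setting else DEFECT

def ai_mimic(player_last_move):
    """Scenario 3: the agent copies the behavior it observed in the human player
    (a Tit-for-Tat-like rule)."""
    return player_last_move
```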

Arend simulated all three scenarios (I did the math; that's kind of our peculiar synergy) with groups of 5 players, in well-mixed populations (so that neighborhoods change every generation), with a variable fraction of AI agents \(\rho_A\) within each group (\(0\leq \rho_A\leq1\)).
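For flavor only, here is a deliberately crude caricature of one such generation (the paper evolves the human players' strategies over generations; I substitute a much simpler imitate-if-better update, and the group-assembly details and parameters below are my own assumptions):

```python
import random

def pgg_payoffs(moves, r):
    """Linear public goods payoff: equal share of the amplified pot, minus own token."""
    share = r * sum(moves) / len(moves)
    return [share - m for m in moves]

def one_generation(humans, rho_A, r, k=4, mu=0.01):
    """One schematic well-mixed generation: each human is grouped with k co-players,
    a fraction rho_A of which are mimicking AI agents (they copy the focal move);
    humans then imitate a randomly chosen better-scoring human, with rare mutation."""
    scores = []
    for h in humans:
        n_ai = sum(random.random() < rho_A for _ in range(k))
        co_players = [h] * n_ai + [random.choice(humans) for _ in range(k - n_ai)]
        scores.append(pgg_payoffs([h] + co_players, r)[0])
    new_population = []
    for i, h in enumerate(humans):
        j = random.randrange(len(humans))
        strategy = humans[j] if scores[j] > scores[i] else h
        if random.random() < mu:
            strategy = 1 - strategy            # occasional strategy flip
        new_population.append(strategy)
    return new_population

# Example: start from all defectors and watch the cooperator fraction.
pop = [0] * 100
for _ in range(500):
    pop = one_generation(pop, rho_A=0.5, r=2.5)
print(sum(pop) / len(pop))   # fraction of cooperators at the end
```

A loop like this is all that is needed to ask, for a given \(r\) and \(\rho_A\), whether cooperation takes over.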

Here is what he found:

In scenario 1, the overall levels of cooperation increased as a function of increasing \(\rho_A\), but the critical \(r\) was unaffected: the barrier to cooperation was not lowered for the human player. 

In scenario 2, all human players quickly found out that they should force the AI agents to cooperate unconditionally (dumb cooperators), but this did not lower the barrier to cooperation either, it remained at \(r_c=k+1\), just as theory predicted.

Scenario 3, however, was different. The larger \(\rho_A\), the lower the value of \(r\) at which the population transitioned to cooperation, as we can see in the plot below. 

Fraction of cooperating players \(p_C\) as a function of synergy factor \(r\), in groups of 5, for different values of the AI agent density \(\rho_A\). The curve to the very right has \(\rho_A=0\) which implies a transition at \(r=k+1=5\). The curves to the left are obtained with increasing \(\rho_A\). 

Well, that smells like success! Increase the fraction of mimicking AI agents, and the human agents are coerced to cooperate at much lower synergy factors. In fact, we can rev up the math engine (meaning, my head) to calculate the predicted critical synergy in this scenario (the \(r\) at which the blue curves in the figure above cross 0.5). Math says:

\(r_c=\frac{k+1}{\rho_A k +1}\)                   (1)

This is a very simple formula, and it came as a bit of a surprise that the dependence on the number of cooperators in the group actually canceled. So, how does this prediction fare against simulations? You can see that in the figure below.

Critical synergy (crossover points in the previous figure) as a function of the agent density \(\rho_A\). The blue-to-green crosses are obtained from simulations (and connected to guide the eye), while theory (Equation (1)) is given by the dashed line.
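Just to read Eq. (1) off numerically for the group size used in these simulations (k = 4); these are the formula's predictions, not numbers extracted from the figure:

```python
k = 4
for rho_A in (0.0, 0.25, 0.5, 0.75, 1.0):
    r_c = (k + 1) / (rho_A * k + 1)
    print(f"rho_A = {rho_A:.2f}  ->  r_c = {r_c:.2f}")
# rho_A = 0 recovers the classic barrier r_c = 5, while rho_A = 1 gives r_c = 1:
# surrounded only by mimics, any synergy at all makes cooperating worthwhile.
```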

In conclusion, AI agents that mimic player behavior can in fact influence player behavior (while this did not happen in scenarios 1 and 2). This strategy appears to be very simple, but keep in mind that mimicking human behavior is not trivial, since humans don't carry around a placard that identifies them as a cooperator or a defector (in biology, this does sometimes happen, and the evolutionary dynamics that describes it is known as the "green beard effect"). Thus, an AI agent may have to learn how to interpret a human player's actions and infer whether they are a cooperator (and therefore cooperate with them) or a defector (and thus not cooperate with them). In any case, the agents have to obtain a sufficient amount of information in order to make that decision, and this can be complicated in real-world situations.

So what have we learned here? It turns out AI agents can make you play nice! Now, in a better world, we would not need such devices to coerce recalcitrant players. After all, there is a rule "treat others as you would like to be treated", and this golden rule would certainly give rise to universal cooperation. Our world, however, is not perfect, but rather is saturated with cheating defectors that are perfectly happy to take advantage of public goods for short-term personal gains. So some of us need to be "shown the way", and it turns out that surrounding them with mimicking AI agents could just do the trick! If only that would work in all of our social interactions.

Of course there are limitations to our approach when trying to translate to actual human populations interacting with technology, and they are described in the paper: 

Arend Hintze and Christoph Adami, Promoting Cooperation in the Public Goods Game using Artificial Intelligent Agents, arXiv:2412.05450

However, we believe the general trends that we have studied would likely carry over, as the dynamics (in the end) are very simple. 

Monday, October 14, 2024

What gets your attention? Brain evolution suggests a new theory

 Are you sure you know what's going on in your brain? Are you really the master of your thoughts?


"Of course I am!" you exhort me for asking, "Who else would be in control?" 


I won't actually be discussing multiple personalities in this post. What I will ask instead is: "How do you make decisions? What is the basis of these decisions? Can you always trust what you base your decisions on?"


A more provocative question would be: "How easily are you fooled?" Or more precisely,


"What happens in your brain when you are fooled?"


Everybody knows we get fooled a lot. A prime example of this tomfoolery is optical illusions. Many have played around with these before, but if you never have, let me suggest a couple of sites that are really good.


Michael Bach: Visual Phenomena & Optical Illusions


Gizmodo: Optical Illusions that might break your mind


In particular, Michael Bach's page explains some of the cognitive science of perception that underlies the illusion. But basically, what it boils down to is that it is not your eyes that see, it is your brain.


What this implies is that your brain makes a lot of assumptions about what it is that it perceives, and the reason it makes these assumptions is that making these assumptions saves a lot of time and energy.


I'll give you an example (not from vision). Suppose you have reached your car, put your hand in your pocket, and find that the pocket is empty. The place where you were sure the key was does not actually hold it. What do you do now?


If the key is not in your pocket, it could be anywhere, right? But do you start checking possible locations randomly? Of course not. You first try the other pocket. Why? Because this is the next most likely location? Why do you think that?


The answer is that this is guided by experience. You know about where keys are likely to be. If they are not in any of your pockets, they still must be on the desk. If that's not the case, you check on the counter. And so on. Experience has created a model of "where keys usually are to be found" in your head, and you are using this model to drastically reduce the time it takes you to find the key. You are not conducting a random search here.


Your visual system operates on a similar premise, except not only does it take into account past experience of visual scenes, but there are evolutionarily hardwired assumptions at work here too. "Chairs are usually on the floor. Windows separate the inside from the outside. Objects that are close appear to be larger." These are just a few of the things your brain "knows" about visual scenes. In fact, it is known that when we look at a scene, we barely take in anything of what is out there. Instead, we make most of it up: we hallucinate it. It works really well because most of the stuff in visual scenes is always the same. Chairs are always on the floor. Windows always separate the inside from the outside. Our eyes will saccade over the scene to perform a few spot checks, to fill in the stuff that is difficult to synthesize. Is it day or night? What kind of a chair is it? What color? We need to look directly at an object to notice its color, because we can only see color in a narrow cone surrounding the point we focus on: everything that is outside of about 5 degrees of your focus is grey-scale (see below).


Central visual field as described in the XKCD comic. 

From https://xkcd.com/1080/ (licensed under CC Attribution-NonCommercial 2.5 License)


It is because of all these assumptions that our brain makes (in order to save time) that we are so easily fooled. We are fooled precisely when these assumptions are subtly violated.


So now you understand that you can make decisions based on what you thought you saw (even when you did not actually see it), but how exactly are you being fooled? What did your brain actually do when it suggested falsehoods to you?

You can imagine that this is difficult to figure out in practice, for two reasons: 1.) People tend not to like having electrodes stuck in their brain to record their thoughts, and 2.) even if we did that, we may not be able to figure out what really happened in the brain just from those recordings. (Because, duh, we have a hard time interpreting those recordings.) So how can we then test hypotheses about how we reach decisions, in particular faulty ones?

Let me indulge in a quick interlude about the importance of models in science. In areas other than brain/cognitive sciences, we often create models of the process we are interested in, and then test different hypotheses within that model. If the model makes predictions that are similar to what you observe in real life, then you have.... a model that at least does not contradict real life. You still don't know what's going on in real life, of course. But you can put the model through its paces. You can test it in many different scenarios. If you cannot get a discrepancy, as hard as you may try, then maybe your model actually incorporates something that is quite similar to what you have in real life. And I can already sense your question:
 
 "Could you do this with brains? Can you make brain models that behave just like human brains?"

So what you are really asking is: can you make Artificial Intelligence? Well, up to now, nobody has succeeded. And you, who have been reading this blog since the very beginning (or at least since I wrote "Your Conscious You", haven't you?), already know how I'm going to answer this. We can make it alright, but we can't design it, because we don't know how brains work, really. Instead, we're going to use the process that already did the "making" at least once: Darwinian evolution. And we can do this because evolution will produce something that works whether we understand that working thing or not. So in this post, I'm going to tell you about an experiment where we evolved brains to solve a task that humans can fairly easily solve in the lab. And then, we'll test whether these artificial brains pass some other tests that are the equivalent of, say, a "brain-permit". (If you do not pass this, you can't be called a brain.) And after the brains that we make pass this test, we're going to try to fool them. We will then find that they get fooled just as easily as humans get fooled, but we can look into these brains and figure out how and why they are fooled. And from that we learn a lot about how our brain works. And it is all in a paper that you'll get to read, of course. (I'm not being very original here, I know. There's always a paper.)


What kind of brain should we evolve? You may think that because I write so much about vision that I'll evolve a brain that is fooled by visual illusions. But there isn't anyone I know that does experiments with such illusions here at MSU. When I was at Caltech, my colleague and friend Christof Koch was doing psychophysics experiments all the time, and I probably could have used some of his data sets for this purpose. At MSU, it turns out that there is a lab that also focuses on psychophysics in a way, by studying how people perceive sound and music: the "Timing, Perception, and Action" Lab, led by Devin McAuley in the Department of Psychology. Devin has been doing some really interesting experiments on how people perceive rhythmic sequences. When I called him up to discuss a possible collaboration, he told me about this really interesting experiment he did with people. And when I say "people", I mean undergrad students that were paid $10 per hour. Shamefully, this is better than federal minimum wage.

The task is really simple: You put on headphones and listen to a repeating beep. It repeats rhythmically. At one point (you are not told when) a beep occurs that is distinguished by being at a higher frequency. You are then asked whether this beep is longer or shorter than the background beeps this "oddball beep" was embedded in. You are exposed to longer and shorter oddball tones equally, and you think "This really isn't all that hard!" People are indeed pretty good at this task, as long as the difference between the background and the oddball is noticeable. And that is actually the "brain-permit" part I was mentioning. There is a psychometric relationship called "Weber's Law" that describes a subject's (or for that matter, any detector's) ability to perceive relative differences. The main idea of this law (sometimes called Weber-Fechner Law) is that "relative differential sensitivity remains the same regardless of size of stimulus". Basically, this means that the measurable difference is proportional to how strong the signal is. You already have an intuitive understanding of this law: you can perceive small differences in loudness when things are whispered, but such differences are not noticeable when you are standing next to enormous speakers at a concert.
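In symbols (this is just the standard textbook statement of the Weber-Fechner law, not anything specific to our work): if \(I\) is the magnitude of the stimulus (here, the duration of the standard tone) and \(\Delta I\) is the smallest change you can reliably detect, then

\(\frac{\Delta I}{I}=\mathrm{const.}\)

In other words, the detectable change grows in proportion to the magnitude of what is being judged, which is exactly the trend described next.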

You can verify this law by testing how well people perceive the difference between the oddball length and the standard tone (within which the oddball is embedded). The longer the standard tone, the bigger the just-noticeable-difference. "Just noticeable difference" (or "JND") is actually a technical term. It is the difference at which half the subjects deem the oddball short, and the other half deem it long.

I know, I know. I'm boring you with psychometric laws when all you want to hear is how you can't be fooled. But psychometry is important, people. The word itself pretty much means "measuring the mental state". So let's leave psychometry for a moment (until I tell you that the brains we evolve pass this test with flying colors, that is). And now let's go right into the "fooling" part.

You see, when you were taking the oddball test (I don't really think it was you specifically, I just put you into the narrative like this, for effect), so when you were taking the test, sometimes something happened that you weren't told about. Most of the time, the oddball tone started exactly when the standard tone would start, that is, they were "in rhythm". But sometimes, the oddball starts a little late, or a little early. And you were not told that. And it turns out that how you judge the length of the oddball depends strongly on this time difference.

If you mull this over a bit, it really shouldn't come as a big surprise that timing is very important in how we perceive the world. The world, after all, is full of periodic (or rhythmic) signals, and our brains are attuned to detecting such rhythmic signals, because they allow us to predict the world a little better. Accurate prediction (generally speaking) is a recipe for survival, so you can imagine how important it is that we get this right. To a large extent, this sensitivity to repeating auditory sequences explains our affinity to music, as I wrote about in an entirely different context.

Here's what happens when subjects (this includes you) are asked to judge advanced or delayed tones: delayed tones are judged long (even if they are short), and advanced tones are judged short. Here's the data that Devin McAuley obtained in his lab:

Duration distortion factor as a function of oddball delay, from Ref. [1].


In this figure, you see three groups of trials: when the oddball was delivered early (first column), when it was delivered on time (middle column), and when it appeared late (third column). The y-axis is the "duration distortion factor", which measures how "wrong" the reported oddball length was. The DDF is one if the oddball appears on time, which means that there is no distortion. But if it is early, the perceived length is short (DDF<1), while late onset creates a DDF>1. However, the subjects were never given longer or shorter oddballs in these experiments: they were all the same length as the standard tone!

So how is this important, you ask?

Well, get this: When the subjects self-report a longer or shorter tone, they are mistaken! They are fooled!

How can you explain this illusion? Well, there are theories that try to explain this effect. One theory posits that the brain measures time intervals using an internal clock that, in a way, pours "time units" into a bucket, so that counting how many units have accumulated gives the length of the interval. The start of the tone opens the gate and the end of the tone closes it, so that the number of accumulated time units determines the length of the signal. I can sense that you're not buying this theory, and indeed that theory has not stood the test of time. Another theory, called "Dynamic Attending Theory" (or DAT), assumes that our attention is driven by changes, and that a rhythmic signal will produce peaks of attention at the expected onset times of the rhythm. When a tone is late or early, it gets less attention (because it misses the attentional peak), and that explains the illusion of longer and shorter tones (when in fact they are the same length as the background).

Indeed, it was Devin McAuley who tested the "time units bucket" theory (actually named "Scalar Expectancy Theory", or SET) against DAT using those undergrad cohorts, and showed that DAT came out ahead. But of course, it's still a theory. How can we figure out what's really going on?


One way is to evolve brains in the computer to do this same task, and then look at whether they are fooled by those malicious early or late tones. And if they are, we can find out why, because we can measure the heck out of those artificial brains without violating any IACUC rules. I will spare you the details of how we evolve artificial brains in the computer. I wrote about it before in this post, and I'll probably write about it more extensively in the future. We call these artificial brains "Markov Brains", and there is a write-up on arXiv that tells you most of what you need to know about them. Here, just imagine we can evolve a lot of them. For this study, we "made" 50 brains (each is the best brain evolved in one of fifty independent experiments). These are relatively simple brains: they have 14 neurons that they can use to perform computations, along with a single neuron that perceives the tone, and a single neuron that signals the decision, as shown in the figure below.

Structure of a Markov brain that listens to a tone, and signals the duration of the tone with a binary decision.
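To give you a flavor of what such a brain looks like under the hood, here is a bare-bones sketch (a deterministic toy version of my own, not the actual implementation described in the arXiv write-up, and with no evolution shown): binary neurons updated by logic gates whose wiring and truth tables constitute the evolvable "genome".

```python
import random

class MarkovBrain:
    """A toy deterministic Markov Brain: binary neurons updated by logic gates
    whose input/output wiring and truth tables are the (evolvable) genome."""

    def __init__(self, n_neurons=16, n_gates=8, rng=random):
        self.n = n_neurons
        self.gates = []
        for _ in range(n_gates):
            inputs = [rng.randrange(n_neurons) for _ in range(2)]     # which neurons the gate reads
            outputs = [rng.randrange(n_neurons)]                      # which neuron it writes
            table = [rng.randint(0, 1) for _ in range(2 ** len(inputs))]  # its truth table
            self.gates.append((inputs, outputs, table))
        self.state = [0] * n_neurons

    def step(self, tone_bit):
        self.state[0] = tone_bit              # neuron 0 "hears" the tone
        new = [0] * self.n
        for inputs, outputs, table in self.gates:
            idx = sum(self.state[i] << b for b, i in enumerate(inputs))
            for o in outputs:
                new[o] |= table[idx]          # gates OR their output into the target neuron
        self.state = new
        return self.state[-1]                 # the last neuron signals the decision

# A (random, untrained) brain listening to a tone of length 5 in an IOI of 10.
brain = MarkovBrain()
signal = [0, 1, 1, 1, 1, 1, 0, 0, 0, 0]
decision = [brain.step(bit) for bit in signal][-1]
```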

The brains get high fitness (that is, their genome will have many copies in the next generation) if they can correctly judge the length of an "oddball" tone that is embedded within a rhythmic sequence of tones, as shown in the figure below.



The "oddball paradigm" asks subjects to judge the length of the oddball tone (in red) with respect to a rhythmic sequence of tones (in grey). The oddball tone to be judged is indicated to the subject by, for example, an elevated pitch. During training, the subjects always hear tones that begin at the exact time the rhythmic signal is expected. But during testing, the subject may be given tones that are advanced or delayed with respect to the expected onset (without revealing that manipulation to the subject).


When we evolved brains to judge the length of oddball tones, they solved the problem in less than 2,000 generations. Not all the brains could do this perfectly, but most could. We evolved them to excel at this task not just for one example sequence (like the one shown in the figure above), but for many different background rhythms, defined by the time between onsets of the background rhythm, known as the inter-onset interval (IOI). For each IOI (between 10 and 25 units) we created a standard tone (half the IOI if the IOI is even, otherwise half of IOI minus 1), as well as all possible longer and shorter tones that fit within the IOI, and asked those brains to judge all of them.
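For concreteness, this is roughly how that stimulus set can be enumerated (the exact set of oddball lengths that "fit within the IOI" is my guess, so treat the range below as illustrative):

```python
# Standard tone and candidate oddball lengths for each inter-onset interval (IOI).
for ioi in range(10, 26):
    standard = ioi // 2 if ioi % 2 == 0 else (ioi - 1) // 2
    oddballs = [length for length in range(1, ioi) if length != standard]
    print(f"IOI={ioi:2d}  standard={standard:2d}  oddballs={oddballs}")
```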

We found that the brains we evolved could judge the short IOIs without any problems, but that the task becomes progressively more difficult the longer the IOI. This is in fact exactly what Weber's Law predicts, and indeed our evolved brains followed it almost precisely! But what about the illusions observed in the experiments with students? How do the evolved brains react to delayed and advanced tones? It turns out that they are fooled in the exact same way as the students are! Below you can see the measured duration distortion factor (DDF) for an IOI of 14 time units, with a standard tone of length 7. This corresponds to the experiment above where the standard tone was 350 msec, if you take a time unit to correspond to 50 msec.

Duration distortion factor for evolved Markov brains exposed to a standard tone of 7 time units, embedded in an IOI of 14 time units. From [4].

Evolved Markov brains are fooled in just the same way as the human subjects: delayed tones are perceived as long, and advanced tones as short. What accounts for this illusion? How is this even possible, since these brains are deterministic? Now, compared to human brains, we have a distinct advantage here: we can peek into these brains to find out how they work!


One way to do this is to understand how the brain's state changes as it is listening to the tone. Here, the brain's state can simply be rendered as a decimal number composed from the binary states of the combined neurons. So, for example, for a brain with 10 neurons, the brain state where all 10 neurons are quiescent (the state '0000000000') is '0', while the state in which all ten neurons fire is the state '1023'. We can then depict a state change by drawing an arrow between two states, depending on whether a tone (a '1') or no tone (a '0') was perceived, as in the figure below.


Brain state change as a function of the input (digit next to the arrow). Top: a ten-neuron brain in binary notation. Bottom: the same state change, but in decimal notation.
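Translating between the two notations is a one-liner. For instance, the state 359 that appears in the movie below corresponds to the 12-neuron firing pattern shown here (which neuron maps to which bit is my arbitrary choice, purely for illustration):

```python
neurons = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1]      # 1 = firing, 0 = quiescent
state = int("".join(str(b) for b in neurons), 2)    # read the pattern as one binary number
print(state)                                        # -> 359
```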

In the movie below you can follow the brain state changes as an evolved brain listens to a standard tone of length 5. Note the loop in state space that has evolved to make the length assessment possible. This loop is in fact an evolved representation of the standard tone.



In that movie, you see the state changes as the brain listens to a standard tone of length 5 embedded in an IOI of length 10. This brain has a total of 14 (of 16) neurons participating in the computation, but only 12 are used to depict the state (one receives the tone, one signals the decision). The oddball tone is depicted at the bottom in green. Note that the decision (here 'L', which stands for "same length as standard or longer", since the decision must be binary) is rendered at the very end of the IOI, in the transition from state 359 to state 3,911.

The movie below shows the transitions in the same brain for a signal that is long (six units), but is advanced by two units. Due to its advancement, it ends at the same exact time that a short tone would have ended, and indeed because the brain does not pay attention to the beginning of the tone, it ends up in precisely the same state as it would have ended up in if it had listened to a short tone. Because of that lack of attention, it issues the "S" determination with full confidence, but is completely wrong. 





Why does the brain not pay attention to the beginning of the tone? According to the DAT theory of attention, both the beginning and the end of the tone represent a contrast that the brain should be paying attention to. In hindsight, however, this makes perfect sense. These brains never experienced out-of-rhythm tones (during evolution) and so focus only on the end of the tone, as this is the only place where there is expected variation! From an information-theoretic perspective, there is no entropy at the beginning of the tone, so information can only be gathered from the end! In other words, the brain only pays attention to the potentially informative parts of the signal, and not to those aspects that are always the same.
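You can put this argument in information-theoretic terms (a one-line, informal version of the analysis): during evolution the onset of the oddball never varied, so its entropy vanishes, and any brain variable \(B\) can therefore gain no information about it:

\(I(B:\mathrm{onset})\leq H(\mathrm{onset})=0.\)

The duration of the tone, by contrast, is the only part of the stimulus with nonzero entropy, and hence the only part worth attending to.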


An information-theoretic analysis of all 50 evolved brains indeed bears this out. Do we then have to completely change our theories of attention?


In the realm of visual attention, one of the common theories of what gets our attention is the "visual saliency" model of Itti and Koch [2]. In this model, attention is attracted to parts of the visual stimulus that are visually salient, that is, they stand out from their background. This is similar to the DAT theory of auditory attention, and it may very well be that this is only part of the story of visual attention. Indeed, we have some evidence that our eye saccades are drawn not only by saliency, but also by our expectation of where the relevant information is to be found. 


A strong indication that what we expect to find plays a crucial role in what we pay attention to is an experiment run by Lawrence Stark. He recorded the "scan path" of human subjects when saccading an image of the famous "Rubin vase" (the black-and-white image that can be seen either as a vase or as two faces in profile looking at each other). When priming the subject with a Rubin vase adorned in such a manner that it is clearly recognizable as one or the other image (see image below), the scan path of subjects follows that of the expected image, even though the subjects were looking at the unadorned image [3].


Two images of the "Rubin vase" adorned with markers that suggest one or the other image. They were used to "prime" subjects who had their scan path measured in Ref. [3], but the image they actually saw when their eye saccade path was scanned was the image without adornment. Image from [5].

We can thus assume that attention is driven by multiple mechanisms: a "bottom-up" mechanism driven by high-contrast (salient) features, as well as a "top-down" mechanism where attention is driven by what the brain expects to experience. We may not always be conscious of what it is that we expect to experience, so you may think you know what your brain is paying attention to, but don't be surprised if you can easily be fooled!


References:


[1] J.D. McAuley and E.K. Fromboluti (2014). "Attentional entrainment and perceived event duration." Phil. Trans. Roy. Soc. B 369, 20130401.

[2] L. Itti and C. Koch (2001). "Computational modelling of visual attention." Nature Reviews Neuroscience 2, 194–203.

[3] L.W. Stark, C.M. Privitera, H. Yang, M. Azzariti, Y.F. Ho, T.T. Blackmon, and D. Chernyak (2001). "Representation of human vision in the brain: How does human perception recognize images?" Journal of Electronic Imaging 10, 123–151.

[4] A. Tehrani-Saleh, J.D. McAuley, and C. Adami (2024). "Mechanism of duration perception in artificial brains suggests new model of attentional entrainment." Neural Computation 36, 2170–2200.

[5] C. Adami (2024). "How brains perceive the world." Artificial Life 30.