Wednesday, 4 July 2012

Paul Cisek: The Vanishing Central Executive: Distributed Neural Mechanisms for Decision-Making


      Abstract: Modern theories of the brain describe it as a series of information processing stages for perceiving and representing the world, thinking about it, and then acting upon it. However, this intuitively appealing and influential view is not well-supported by neurophysiological data. Sensory, cognitive, and motor functions appear distributed through diverse brain regions and often mixed within the activity of individual neurons. As an alternative, I will describe a model based on theories from ethology, which suggests that behavior involves a continuous competition between potential ways to interact with the world. I will present recent results supporting some of the key predictions of this alternative way of looking at how the brain implements behavior, focusing on neurophysiological studies of decisions between actions.

      Cisek, P. (1999) Beyond the computer metaphor: Behaviour as interaction. Journal of Consciousness Studies. 6(11-12): 125-142. http://www.cisek.org/pavel/Pubs/Cis1999.pdf
      Cisek, P. and Kalaska, J.F. (2010) Neural mechanisms for interacting with a world full of action choices. Annual Review of Neuroscience. 33: 269-298. http://www.cisek.org/pavel/Pubs/CisekKalaska2010.pdf

Comments invited

36 comments:

  1. I'm not sure I understood the distinction between "classical representations" and "pragmatic representations". Any details about that issue?

    Replies
    1. Descriptive representations are explicit and objective representations of the world (the fact that a car is a car). Pragmatic representations are implicit and subjective representations of the world and are active in the interaction between an individual and the world.

    2. I think, if I understand correctly, the biggest distinction is that classical representations are "objective," in that we represent the outside world separate from our interactions with it, e.g., we can perceive a bottle of water regardless of whether or not we're thirsty. Cisek is suggesting that the sensorimotor system is directly involved in decision making, and as such, our representations are related to our subjective state and specific environmental affordances.

    3. I take it to be along the lines of what Gallagher calls "traditional representations" and "minimal representations" (among which you find Millikan's "pushmepullyu representations" and Clark's "action-oriented representations"). http://philpapers.org/rec/GALAMR

    4. Right - you've all understood what I was getting at. I'm not so familiar with Gallagher but what I call pragmatic representations are pretty much what Millikan and Clark were talking about. Some might say that they're not "representations" at all, but I think the terms are useful because they allow us to think about such things lying on a continuum.

      Furthermore, they let us ask how descriptive representations might have evolved from pragmatic ones. For example, a very simple organism might have only a pragmatic representation of a food source, confounded by its state of hunger, because that's what it needs to select useful actions in the here-and-now. But a more advanced animal that has the capacity to refer back to prior experience would benefit from specializing at least some of its representations to be free of that state-dependence - yielding what Tolman would call a cognitive map. Of course many animals (all of whose brains are evolutionary "works-in-progress") might have something in-between.

    5. I find it very interesting that “pragmatic” representations, which underlie action specification and selection, are reminiscent of what a phenomenologist would call “presentification”. Pragmatic representations flesh out the notion underpinning much of phenomenology, according to which meaning stems first and foremost from interaction with, and coping with, the external environment. I am in complete agreement with Doctor Cisek’s assessment of the situation, and coming from phenomenology I find this work very refreshing.

    6. Thanks Max! Mike Shadlen and others keep telling me that I need to read Merleau-Ponty because all of that is in there. I'm sure that is true - this is an old idea that has appeared in many forms.

  2. I get the impression Cisek strawmanned computationalism to make for a convenient enemy, but computationalism could totally be built on top. After all, we do think serially at some level – I certainly can't process language from two sources at the same time. You can have parallel, affordance-competition processing at one level, and serial symbol processing at the other.

    Replies
    1. I would like to understand your point better (it will help me understand Cisek's ideas). Can you expand on your example of processing language from two sources simultaneously? How does this represent serial processing?

    2. I can't read from two places at a time. If I put on music while I'm reading, it needs to have little or no speech. In fact, if I'm interrupted while speaking, I might also have trouble going back to where I was. This might indicate that when it comes to higher-order symbolic functions like speech and rational thinking, we could be processing ideas one at a time (serially) – otherwise we'd be able to process two or more at a time (in parallel).
      As a matter of fact, I'm not sure that is the case, but people do build computationalism on top of dynamic and/or embodied models (e.g. Susan Schneider in philosophy and Chris Eliasmith in computer science).

    3. @Moka - But isn't the fact that you can't read two sources at the same time a consequence of reading requiring conscious attention? And conscious attention cannot operate in parallel.

    4. My point is not to deny that serial processing exists, but to argue against using it as the foundation for our theories of behavior and the brain. From comparative neuroanatomy we know that the basic architecture of the vertebrate nervous system was laid down >100 million years ago, and largely conserved since then, even in humans. That architecture could not have been adapted to serve the needs of abilities that did not exist at that time. So the examples you give are important and nobody wants to deny them, but I don't think we should build our fundamental theory of neural organization around them.

      Furthermore, even to understand recent abilities (which may be well described by serial models) it is important to first think about the context within which they evolved. If I were to guess, I'd speculate that such abilities are specializations of the selection system(s). In fact, I think it's interesting that the mechanisms some have proposed for serial sequencing of action selection (see Bullock's 2004 TICS review on competitive queuing models) have an "if-then" structure that could be elaborated toward something like the production systems of classic AI. So the mechanisms of reasoning could just be extensions of earlier and less abstract selection mechanisms. This is very speculative, of course, but it makes some neurophysiological predictions that could be tested.

      To summarize - I think it's useful to look at evolution (the "descent with modification" bit) as a roadmap for building our theories.

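      The competitive-queuing mechanism mentioned above (Bullock's 2004 TICS review) is concrete enough to sketch in a few lines. This is a toy illustration of my own, not code from the review: all items of a plan are active in parallel with graded strengths, and a repeated choose-the-strongest-then-self-inhibit loop reads them out one at a time, so serial order emerges from a parallel representation.

      ```python
      def competitive_queuing(plan_strengths):
          """Toy competitive-queuing sequencer: items held in parallel with
          graded activation; each cycle the strongest item wins the
          competition, is 'executed', and inhibits itself."""
          plan = list(plan_strengths)
          order = []
          while any(s > 0 for s in plan):
              winner = max(range(len(plan)), key=lambda i: plan[i])
              order.append(winner)      # winner is executed...
              plan[winner] = 0.0        # ...and self-inhibits
          return order

      # A gradient of strengths encodes the intended order item0 > item1 > item2.
      print(competitive_queuing([0.9, 0.6, 0.3]))  # → [0, 1, 2]
      ```

      The "if-then" flavor Cisek mentions is visible here: each step is a condition (who is strongest now?) followed by an action (execute and suppress), which is why such a selection loop could plausibly be elaborated toward production-system-like reasoning.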
  3. I feel like this is a somewhat naive question, but still:
    Cisek's proposal is that the brain entertains 'all potential actions' at once. Yet there needs to be some limiting of the infinite number of potential actions to begin with. How does that happen?
    Of course this question can apply to any decision-making model, since any decision involves potential actions...

    Replies
    1. The infinite number of potential actions is limited by physical constraints and the goals of the action. Think of the zebra running away from the lion towards the river... are there more than two potential actions for the zebra in this case?

    2. If you want to push the question a bit and get a more process-oriented, less intuitive answer, I'd say: he mentioned affordances. An affordance is the idea that the features you notice and retain in objects are the ones that suggest their use (e.g. a handle is a "to-be-twisted") – so it's a function of both the object and the organism (and its goals). There are psychological and mathematical models for that...

    3. It's a great question, and I think it has three mutually compatible answers. The first is Martha's - that the number of potential actions is already limited by the physical constraints of the environment at any given moment. Second, I think it's not so much a matter of the *number* of actions but of the resolution of the parameter space in which they are represented. Studies of reaching suggest that once potential movements are within 60 degrees, they start to get mixed (Ghez et al. 1997, EBR), so perhaps only about 6 planar reaching movements could be represented at a time, each as a "hill" of activity that is fairly broad. Once more degrees of freedom enter the picture, the numbers get larger, but it's never so large as to cause a computational explosion. Finally, selection should be operating throughout the sensorimotor loop, reducing the options as they go from "targets on retina" to "directions for moving my right hand". Many studies have shown that things get progressively more sparse as you record further and further down the dorsal stream.

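      The "resolution of the parameter space" point above can be illustrated numerically. This is a toy sketch of my own (the ~30-degree Gaussian tuning width is an arbitrary choice made so that hills merge when targets are closer than about 60 degrees, roughly matching the Ghez et al. observation): each potential reach target contributes a broad hill of population activity, and nearby targets blur into a single peak, so the map holds only a handful of distinct planar options at a time.

      ```python
      import numpy as np

      def population_activity(targets_deg, width_deg=30.0, n_cells=360):
          """Activity over planar reach directions: each potential target
          adds a broad circular-Gaussian 'hill' (coarse coding sketch)."""
          dirs = np.arange(n_cells)  # preferred directions, 1 deg apart
          act = np.zeros(n_cells)
          for t in targets_deg:
              d = np.minimum(np.abs(dirs - t), 360 - np.abs(dirs - t))
              act += np.exp(-0.5 * (d / width_deg) ** 2)
          return act

      def count_peaks(act, thresh=0.5):
          """Count local maxima above threshold on the circular map."""
          left, right = np.roll(act, 1), np.roll(act, -1)
          return int(np.sum((act > left) & (act > right) & (act > thresh)))

      # Targets 120 deg apart stay distinct hills; 40 deg apart they merge.
      far = population_activity([0, 120])
      near = population_activity([0, 40])
      print(count_peaks(far), count_peaks(near))  # → 2 1
      ```

      With this width, the number of simultaneously distinguishable planar reach directions tops out around six, consistent with the estimate in the comment above; no computational explosion arises.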
  4. Sort of a follow-up on Mokawi's post: I think Cisek makes a good case that the model of decision-making often associated with computationalism (where general action planning always occurs before specific action selection) might be seriously wrong. But it seems to me that this decision-making model is not intrinsically related to computationalism and that he might be a little too quick to take this as a general objection to computationalism. Many computationalists are only committed to the view that propositional attitudes are relations to syntactically structured representations (as opposed to distributed representations, the sort that connectionists are fond of). They can still allow for the existence and causal power of 'pragmatic representations' -- provided those are not identified with propositional attitudes and provided that we also have syntactically structured representations that have a causal role.

    Replies
    1. Actually, my ideas on decision-making are not the basis of my general objections to serial computationalism - it's the other way around. I think the problems with computationalism go far beyond decision mechanisms and have more to do with that general input/output picture of behavior that (I think) we inherited from dualism. An animal is a control system, or a *special case* of an input/output system in which the output influences the input. Computers don't do that (usually). So to describe the brain as an input/output system without considering its control loop structure is a bit like describing cars as energy conversion devices without adding something that distinguishes them from chloroplasts.

      So it's my objection to computationalism as a theory of behavior which leads me to think about "decision-making" from the perspective of sensorimotor control.

      But as I replied to Mokawi above, I don't deny that some things are serial, and that some descriptive representations are useful. I just don't think it's a good foundation and should not be as pervasive in the brain sciences as it often seems to be.

  5. I thought these were some great ideas. I was wondering if Dr. Cisek could respond as to how he thinks this model plays into not only decision making but parallel perception strategies, a la Dennett's Marilyn Monroe room: we walk into a room plastered with 1000 Marilyn Monroes and can near-immediately identify all as the same without having to inspect each one.
    How do we get by in such a scenario without a central executive?

  6. You look at one and you have a Marilyn Monroe feeling. You keep looking, no different feeling. You quickly reach, by induction, the conclusion that they are all Marilyn Monroe... even if there is a Groucho Marx in there. If you happen to see the Groucho Marx, you might revise your conclusion and look more closely at the other 999. Why would you need a central exec in your Central Nervous System? The CNS IS your central exec.

    Replies
    1. That's a great answer. I couldn't have said it better myself. This is also (I suspect) what Mike Shadlen would say: We interrogate the world, not re-create it inside the head and then look at it again.

  7. Ok.. now keep an open mind...

    Prof. Cisek's specification processes occurring in parallel with the selection processes and Prof. Morsella's conflict-resolution processes got me thinking about this hypothesis about the function of feeling. Let me know what you think.

    Just as the problem of explaining how the brain works is an underdetermined problem, the problem faced by the brain in building a model of the world is also underdetermined. Now, attentional processes are usually defined as selection processes in which some information is chosen while the rest is dropped (or inhibited). Thus, whenever a selected model proves to be unsatisfactory, a new model has to be built.

    Now... imagine the evolutionary advantage that the ability to build AND MAINTAIN many different models of the world at the same time would represent. Whenever one doesn't work out well, another one is immediately accessible. However, if you could attend to all models at once, potential actions would most probably conflict. You would therefore need a mechanism by which you could consider only one of them (without dropping any of the other models). Wouldn't feeling then be a great way of dealing with that problem?

    Let's say at time t, you have models of the world A, B, C and D. Now if you were to use attentional processes to select model A, models B, C and D would be lost.

    How could they all be selected and simulated within the same temporal frame, yet not conflict with each other for behavior? Feeling appears to be a great solution for this.

    If this hypothesis is correct, it would describe the function of feeling. The mechanism by which that would be made possible would remain to be postulated.

    You could see in this an analogy with the "many-worlds" interpretation of quantum mechanics, where many alternative histories and futures are real.

    Now, I said at the beginning to keep an open mind! Given how difficult it appears to find a function for "feeling" that could have been advantageous through evolution, this "many-world models" hypothesis is something to consider! How else than through feeling could we attend to one world model WHILE representing and simulating all the other possible ones (or at least some other models, given physiological constraints...)?


    Etienne Dumesnil

    Replies
    1. Interesting idea, and you might be right. But myself, I'm not convinced that there *is* such a thing as a "model of the world". Perhaps all we have are conjectures about the world, like perceptual categories, propositions/beliefs, etc. Each of these might, as you say, have multiple versions that collapse into one through competition. Indeed, perceptual categorization seems to work through competition between cell clusters arriving at a single winner. But is "feeling" the mechanism that resolves the competition? I don't know. I would say the feeling only comes after the competition is resolved, which is why it seems unitary. Maybe all that enters Baars' "Theatre of Consciousness" are the winning conjectures about the world.

    2. Why would it feel like something for a system to encode and select among multiple conflicting hypotheses?

  8. I might be wrong, but it seems that there is a bias toward studying things in the positive form: action selection instead of action avoidance, targets instead of obstacles, rewards instead of punishments. Is there something special about the negative form? For example, are obstacles also coded as potential actions and later inhibited through selection mechanisms, or do obstacles never reach a potential-action representation?

    Replies
    1. Good point! I guess that to a certain extent, we could consider obstacles to be targets that have to be avoided, just as action avoidance includes all those actions that got inhibited during action selection. But I think the same as you: studying the sun doesn't tell us that much about the moon, and avoiding an action might really involve different neural mechanisms than selecting one.

    2. This is not intended as a definitive answer by any means but I can make a few speculative comments on the basis of what I know about affordances as the basic representational medium in the brain and the neurobiology of reward and punishment.

      The first thing to know is that the classic version of the affordance typically takes on a certain level of abstraction, but the brain is not limited to this level. Specifically, 'affordances' have generally been described as corresponding to a discrete whole object (a cup, say, or a hammer, pencil, etc.) and a corresponding sequence of motor behaviors. There are two important -- and closely related -- things that are left out in such examples. The first is an implicit feature of the affordance idea in general that is not often explicitly articulated, namely, that the association or linking of the behavior pattern with a particular object is fundamentally made on motivational grounds. For example, we commonly say, when discussing affordances, that a cup is 'for' grasping, and to say this is to frame the cup's affordance on a motor-behavioral level. However, on a slightly more 'abstract' level, the cup can equally fairly be said to be 'for' drinking from; and by extension, 'for' quenching thirst (at least, when it contains a potable liquid).

      Note that these descriptions -- all equally accurate -- apply to various descriptive levels of the object; starting from the level of a sequence of motor behaviors, we can work our way up to descriptions that correspond principally to motivated actions, and all the way up to goal states. There is a lot of evidence for this, and perhaps the best example comes from psychology experiments that investigate the relationship of motivational state to perception and attention. For example, when you are thirsty, you will locate a cup (or other thirst-quenching-related object) in a visually crowded field significantly faster. The same goes for food- or eating-related objects (e.g. utensils, plates, etc.) during states of hunger. Additionally, this effect generally scales with the degree of thirst / hunger in question. This demonstrates that the visual system is, at least partially, interpreting the identities of visual stimuli on motivational grounds. The implied adaptive utility of the brain's use of affordance coding is exactly this ability to facilitate the transformation of perceptual information into behavioral and motivational information, which, as we learned from Paul's talk, is something that needs to happen continuously in real-time.

      The second implicit problem with common examples of 'affordances' is that they are typically locked to a particular level of object identity. Importantly, this tendency is a reflection of our perceptual biases (and our linguistic description of these), rather than any actual limitation on the brain in terms of what representational level an affordance can operate on. In other words, my point is that affordances do not exist strictly on the object level, but can extend 'lower down' to associations between individual perceptual features and movement directions, as well as 'higher up' to social categories and complex types of motivated behavior.

    3. Evidence for this on two ends of this continuum comes from two rather different areas of research. On the lower-order end, there is substantial neurobiological work that has convincingly established that visual stimuli are simultaneously represented as percepts AND action sequences even on the level of simple visual features such as location, color, etc. For more information on this, look up the "theory of event coding" (TEC) by Prinz et al. To summarize, though, the principal goal of this theory is to account for how we can make simple visual-feature-to-motor-response associations, such as you might make during an experimental task in which you have to report the appearance of a green stimulus of variable shape and size with a rightward arm movement, and a red stimulus with a leftward movement. In this context, the simple visual feature of color becomes associated with a directional motor behavior, rather than some unitary whole object or tool. To reiterate, this is important because it shows that the essential definition of an 'affordance,' as a perceptual representation specifying sensory AND behavioral information simultaneously, is not limited to the object level but can apply to lower-order visual features and simple movement parameters as well.

      Contrasting examples of affordances scaling up to more abstract levels of description come from social psychology, specifically studies of behavioral modulation by visual priming. The big name to look for here is John Bargh and his work on automaticity and social priming. To quickly take two examples, you can subliminally flash objects associated with categories of social behavior, such as a hammer or gun (for priming the behavioral category of 'violence') or a briefcase or person in a business suit (to prime the goals of dominant or competitive behavior colloquially associated with 'business' in general). This is important for the affordance concept because it demonstrates that objects can specify behaviors in more abstract terms than simple chains of discrete motor behaviors -- rather, they can 'suggest' abstract categories of social behavior to human observers properly equipped (cognitively, and by socialization) to perceive them.

      Now we can (finally) come back to your original comment about 'negative affordances.' I would suggest, on the basis of the above, that affordances can indeed be 'negative', in the sense that a lion could 'afford' the behavior of withdrawal by associating the percept of lion with the behavioral parameter of "move away ASAP." Alternatively, this association could be made at higher or lower levels of abstraction; maybe you will connect the withdrawal behavior parameter with teeth, or anything that roars, or with all objects falling within the category of 'predatory felines,' etc. Such a behavioral association is possible because there is nothing intrinsic to the notion of affordance-based perception that prohibits affordances from being 'negative' in the sense of specifying withdrawal behavior, because the brain's ability to tie behavioral parameters to particular patterns of sensory input is not limited to approach behaviors.

      Lastly, in the specific context of Paul's talk, it's important not to neglect the centrally located "biasing" mechanisms. The ability of the brain to flexibly apply motivational valence to environmental affordances in a contextually-sensitive manner allows for different contextual responses to otherwise identical affordances. To take a slightly silly example, if I put an electrified mug in front of you, you could 'bias' your selection systems against selecting a grasping movement by biasing the mug affordance with an aversive motivational signal.

      That's my two (hundred?) cents. Paul -- what do you think?

    4. Thanks for the two hundred cents! My question was not so much if there are negative affordances. I agree with you on that, but it was really nice to read your take on it. Do you know of any Cog Neuro study that compares positive and negative action affordances?

    5. Thanks for a great question! Regarding obstacles, one way to think about them is as valleys in the activity map. Suppose the population activity in the sensorimotor system is something like a probability density function over the space of possible movements. If your world is empty, then the function is flat but not zero. Competition in a uniform map is balanced and stable, so nothing happens. If an attractive object appears then there's a peak of activity which corresponds to the actions that bring you to it. Because of competition, that peak pushes the rest of the activity in the map down a bit - so the attractive object is indirectly (through lateral inhibition) suppressing other actions. Obstacles cause such suppression directly, through inhibitory inputs. So if you see a cluttered scene with a few attractive objects and a few obstacles, the resulting map is a complex landscape, and movement results by tending toward the peaks, moving along ridges, etc. This is related to the attracting/repulsing fields of Kurt Lewin in the 40's and to what Baldauf and Deubel (2010) called an "attentional landscape". It is also similar to some algorithms in robotics.

      There's some evidence for this kind of thing in the motor system in the psychophysical studies of Mel Goodale and Jody Culham, and some evidence from Steve Scott that corticospinal reflexes get tuned up by obstacles so that they are "repulsing". I'm not aware of direct neurophysiological evidence for this, but all that I've seen is consistent. For example, activity in PMd appears to be normalized across the population - so that on average increases in some cells are accompanied by decreases in others, just like you'd expect from a probability density function. That's also (though only partially) seen in parietal cortex.

      So yes, there do appear to be negative affordances in the sensorimotor system, and this probably extends to higher levels as well.

      There's also a lot of research on negative valence. One great recent example is the research by Okihide Hikosaka on the habenula, and on how it participates in learning.

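      The landscape described above (peaks for attractive objects, direct inhibitory valleys for obstacles, competition via normalization across the map) can be sketched numerically. This is an illustrative toy model, not Cisek's actual implementation; the baseline level, tuning width, and inhibition strength are arbitrary values chosen for the demonstration.

      ```python
      import numpy as np

      def action_landscape(targets, obstacles, n=360, width=30.0, inhibition=0.3):
          """Toy 'attentional landscape' over movement directions: targets
          add broad Gaussian peaks, obstacles subtract activity directly,
          and the map is normalized so activity behaves like a probability
          density over possible movements."""
          dirs = np.arange(n)

          def bump(center):
              d = np.minimum(np.abs(dirs - center), n - np.abs(dirs - center))
              return np.exp(-0.5 * (d / width) ** 2)

          act = np.full(n, 0.1)                 # flat but nonzero baseline
          for t in targets:
              act += bump(t)                    # excitatory drive toward targets
          for o in obstacles:
              act -= inhibition * bump(o)       # obstacles suppress directly
          act = np.clip(act, 0.0, None)
          return act / act.sum()                # competition via normalization

      land = action_landscape(targets=[90, 270], obstacles=[90])
      # The obstructed direction loses the competition to the free one.
      print(land[270] > land[90])  # → True
      ```

      Note the two suppression routes the comment distinguishes: targets suppress rivals only indirectly, through the normalization step, while obstacles subtract activity directly, carving a valley into the landscape.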
    6. Matt: Indeed a lot of cents! Maybe even three hundred! :)

      One point you mention that I'd like to emphasize some more is the idea that perception of affordances is in some ways primary to perception of "objects". What our brain needs most (for survival) is to detect what the world affords and demands: There are opportunities for eating, opportunities for hiding under, and demands for running away (you could call these last ones "negatively-valenced affordances"). Anyway, it just so happens that many of these tend to stay stuck together. For example, you might have affordances for eating and for throwing and for keeping papers from flying away, all at the same time, and observe that they tend to stick together. This might lead you to devise the concept of an object that we, by convention, call an "apple". In other words, our conceptual categories might be shorthand notation for packages of affordances, and I think this is where the meaning of words comes from.

    7. This discussion of definition by affordance has got me wondering: Are different affordance (or even context)-based considerations of the same object (e.g. apple for eating vs. apple for throwing) represented differently in the brain? I assume there's going to be some difference regarding the possible motor activity (the action specification, as Dr. Cisek refers to it) associated with each one (arm movements for throwing vs. eating), but would there also be distinct categorical representations for "things that can be thrown", or "things that can be eaten", or distinct goal-based representations (e.g. need to deter threat, need to eat) for different objects? This issue also strikes me as relevant to the "abstract rule"-selectivity that prefrontal neurons demonstrate during associative learning, though in this case the rules are independent of any of the qualities of the stimulus (see Asaad, Rainer, and Miller, 1998). I also think it's interesting that a hallmark of intelligence is the ability, in both cases (feature-inherent or arbitrary/symbolic), to easily recognize (e.g. tool or rule identification) and flexibly shift between affordances.

  9. This comment has been removed by the author.

  10. Xavier Dery ‏@XavierDery

    #TuringC : Has someone been counting the number of references to William James? Kidding aside, we do need to pay our intellectual debts.

    2:37 PM - 4 Jul 12 via Twitter for Android

  11. Xavier Dery ‏@XavierDery

    Cisek talk: The brain makes decisions when the parallel, interactive action of its different systems reaches a distributed consensus #TuringC

    2:53 PM - 4 Jul 12 via Twitter for Android

  12. Looking back on all the lectures, Dr. Cisek's talk stands out. I've often thought, throughout the Summer Institute, that perhaps our current scientific paradigm does not have the tools to answer the "hard" question about consciousness. Although Dr. Cisek's contributions are more about 'doing' than 'feeling', IMHO he takes a step in the right direction by breaking away from the standard information-processing framework.
