Saturday 7 July 2012

Axel Cleeremans: Consciousness and Learning


      Abstract: Here, starting from the fact that neural activity is intrinsically unconscious, I suggest that consciousness arises as a result of the brain's continuous attempts at predicting not only the consequences of action on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of metarepresentations that characterize and qualify their target representations. Such re-representations form the basis of conscious experience, and also subtend successful control of action. In a sense, then, this is the enactive perspective, but turned both inwards and further outwards. Consciousness amounts to 'signal detection on the mind'; it is the brain's (non-conceptual, embodied, implicit) theory about itself. By this hypothesis, which I call the "radical plasticity thesis", consciousness critically depends on a cognitive system's ability to learn about (1) the effects of its actions on the environment, (2) the effects of its actions on other agents, and (3) the effects of activity in one cerebral region on other cerebral regions.

      Cleeremans, A. (2011). The Radical Plasticity Thesis: How the brain learns to be conscious. Frontiers in Psychology, 2, 1-12. http://srsc.ulb.ac.be/axcwww/papers/pdf/07-PBR.pdf
      Timmermans, B., Schilbach, L., Pasquali, A., & Cleeremans, A. (2012). Higher-order thoughts in action: Consciousness as an unconscious redescription process. Philosophical Transactions of the Royal Society B.
      Pasquali, A., Timmermans, B., & Cleeremans, A. (2010). Know thyself: Metacognitive networks and measures of consciousness. Cognition.
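
      The Pasquali et al. (2010) paper above develops this redescription idea as "metacognitive networks". Purely as a hedged sketch (the toy task, layer sizes, and learning rule below are invented for illustration; this is not the authors' actual architecture), a first-order network can perform a task while a second-order network learns, by trial and error, to wager on the first-order network's hidden states, i.e. to do 'signal detection on the mind':

        import numpy as np

        rng = np.random.default_rng(0)

        # First-order network (fixed random weights, for brevity): does the task.
        W_in = rng.normal(size=(8, 4))    # stimulus -> hidden
        W_out = rng.normal(size=(4, 1))   # hidden -> response

        # Second-order network: reads the first-order HIDDEN activity (not the
        # stimulus) and wagers on whether the first-order response is correct.
        V = np.zeros((4, 1))

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        wagers = {0.0: [], 1.0: []}
        for trial in range(5000):
            stimulus = rng.normal(size=(1, 8))
            target = float(stimulus.sum() > 0)                       # toy task
            hidden = np.tanh(stimulus @ W_in)
            response = float((sigmoid(hidden @ W_out) > 0.5).item())
            correct = float(response == target)

            wager = sigmoid(hidden @ V)                # metacognitive judgement
            V += 0.05 * hidden.T @ (correct - wager)   # delta-rule redescription
            wagers[correct].append(wager.item())

        # Compare mean wager on correct vs. incorrect trials: to the extent that
        # correctness is predictable from the hidden state, the wager tracks it.
        print(np.mean(wagers[1.0]), np.mean(wagers[0.0]))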

Comments invited

45 comments:

  1. What and how strongly you feel is a matter of degree. But whether you feel (anything at all) is not: It is all-or-none.

    1. Just to make your point clear: Do you think "feeling" is an operative concept?

    2. So can we say that the blindsight subject we saw in the video avoiding obstacles is experiencing an unfelt seeing process? In other words, he avoids the objects the same way a robot would: it is unfelt doing. That is why he claims he cannot see anything: the feeling process is never reached for that perceptual modality.

    3. The blindsight patient, unlike a robot, feels *something* while he is walking; he just does not see anything. But since he does not bump into anything, his brain must be detecting optical input and transmitting it to the locomotor system. (The patient may nevertheless feel a motor urge or inclination to move right instead of left...)

  2. I was sort of unconvinced by Cleeremans' argument that we sometimes have graded conscious states. He used two types of data to infer his conclusion: (1) the subjects' self-reports about how it felt to perform some task; (2) the subjects' performance on that task. But this does not really entail the existence of graded conscious states. Regarding (1): there are many cases where we are fooled by our intuitions about our own mental states (the work of Haggard comes to mind here, where he argued that we sometimes feel most in control of our actions when we are least in control). Moreover, it might be the case that subjects confuse the fact that some of their fully conscious states are very quickly forgotten with their being only partially conscious. Regarding (2): the linear change in performance might be explained without positing graded conscious states. It might be the case, for instance, that conscious performance and unconscious performance use more or less the same mechanism, and that this mechanism simply works better when we are exposed to the image for a longer time.

    1. All good points. The question of whether consciousness involves a sharp, dichotomous crossing of a threshold vs. more linear, gradual changes as a function of stimulus quality is a challenging one, but I see it as having fundamental implications for our understanding of the mechanisms of consciousness (and it certainly distinguishes between two current theories: Dehaene's NWT and Lamme's recurrent processing hypothesis).

      I am struck, essentially, by the fact that conscious experience has a consistently graded character for me. At any moment, I am aware of many different things and in many different degrees -- from vague far-away street noises to the visual impression of looking at my computer screen to a vague pain in one of my feet; it all forms a unified, richly structured and graded phenomenal field. Contrast this with the rather all-or-none experience of detecting a single stimulus on a computer screen in a darkened room. These are completely different experiences.

      The trouble with gradedness is that you can get gradedness out of the combination of graded stuff or out of the combination of binary elements (e.g., vinyl vs. CDs). Kouider explicitly developed the latter position with the notion of partial (vs. graded) awareness.
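
      A toy numerical illustration of that last point (the probabilities and sample size are assumed, nothing more): averaging many all-or-none micro-events already yields a smoothly graded aggregate, so graded reports alone cannot decide between graded and binary underlying elements:

        import numpy as np

        rng = np.random.default_rng(1)
        # 10,000 all-or-none "micro-detections" per condition: each is 0 or 1.
        for p in (0.1, 0.3, 0.5, 0.7, 0.9):
            events = rng.random(10_000) < p
            print(f"P(on) = {p:.1f} -> graded-looking mean report = {events.mean():.3f}")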

    2. Perhaps Dr. Cleeremans chose to check the Twitter feed in the middle of the discussion to illustrate that descent into a different grade of consciousness is possible.

  3. I found your talk very interesting, and it made me think about a reflection I had a couple of days ago. In fact, I was wondering what we have that robots don't that makes us conscious. You focused on learning and plasticity, but robots can learn; they can accumulate a lot of data about their environment and use it to adapt their responses. I guess the same thing can be done with other agents, like "if I do this behavior the other robot will do that" (not a ToM, just an action-effect "understanding"). It's a lot like what we do as we grow up (and as we learn through life too): we accumulate data about our world and about others to forge our behaviors, but also our personality, traits, etc. Then what do robots miss? Is it a question of capacity? We do have super-powerful computers, but are they able to integrate and process as much information as the brain does? I'm not convinced that's it. They are really not as flexible as we are (our hardware seems to permit a lot more than theirs, i.e., plasticity). They lack our senses, so they miss part of the environment. They don't have a personality (at least I don't remember any example of that)? But what is truly missing?

    1. I completely share your point of view, Marjorie.

    2. I forgot about emotions, though; they are not wired to have emotions.

    3. What is truly missing from contemporary and (as far as we can tell) forthcoming robots is a sense of life and death on the one hand, and individuality on the other. Any living organism is endowed with drives to reproduce and to survive; there is nothing "felt" that is comparable in robots. The further difference is that each individual organism is unique and simply cannot be cloned in terms of reproducing that organism's particular trajectory through existence. Not so for robots.

      I am not sure we want learning robots that fear death and want to reproduce, though.

    4. Something they also miss, from my point of view, is a sense of their own physical existence. Most robots we saw have some sort of actuator and a camera to perceive something of the outside world. Reiklitis mentioned that some sorts of proprioceptive sensors are available, but so far I have seen no attempt in the field of robotics to investigate how the perception of one's own body could be achieved by a robot. And even if they have multiple sensors, I am still wondering how much these are processed in a multisensory way. I would be curious to see a robot which has proprioception, somatosensation and vision, and then goes to explore the world and has to realize at some point that its body and the world are different things. It is a bit far-fetched for now, but I think this would also lead to an experience with certain of the aspects you mentioned, Axel. So, we would need a robot with a Camus-like experience... ;)

    5. Dr. Cleeremans is correct in his description of what is missing in robotics, but I don't see how these features are necessary for consciousness, or how they fit into the radical plasticity thesis.

    6. What robots lack is that they do not feel; they just do.

    7. > Dr. Cleeremans is correct in his description of what is missing in robotics, but I don't see how these features are necessary for consciousness, or how they fit into the radical plasticity thesis.

      The idea is that to feel as we do, robots would need to assign value to things, and the only way this can be achieved is through experience: learning about different things so that you come to prefer some over others. That makes you seek some things, avoid others, and so on. No robot today does that, for nothing ever means or does anything to them. Minimally, this would require (1) some basic drives, and (2) the ability to learn. That's where it fits with the RPT.
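
      As a sketch of what "assigning value through experience" could minimally look like (the actions, rewards, and learning rate here are all invented for illustration), a simple agent with basic drives (rewards) and a learning rule comes to seek some things and avoid others:

        import random

        random.seed(2)
        # Hypothetical "drives": the reward the world delivers for each action.
        true_reward = {"seek_sugar": 1.0, "touch_flame": -1.0, "idle": 0.0}
        value = {action: 0.0 for action in true_reward}   # learned values

        for step in range(500):
            # Epsilon-greedy: mostly take the currently preferred action.
            if random.random() < 0.1:
                action = random.choice(list(value))
            else:
                action = max(value, key=value.get)
            reward = true_reward[action] + random.gauss(0.0, 0.2)
            value[action] += 0.1 * (reward - value[action])  # learn from experience

        print(value)   # "seek_sugar" ends up valued, "touch_flame" avoided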

    8. But today's robots *can* learn and they *do* have "drives" (as any servo mechanism does). I am not sure what "assigning value" means, unless you mean feeling: I seek sugar because it tastes good; that has "value" for me.

    9. That's exactly what I mean!

    10. So if a robot knows it's running out of battery and knows it needs to get to a power supply... does it "feel" something we might call "tired"? And is it conscious of its state?
      Would that be enough to ascribe feeling to it?

    11. KNOWING KOANS
      @ Inge Broer

      "Knows" is alas yet another weasel word.

      A robot, like a thermostat, can detect when its power is running low, and it can act on it (re-charge), but if it doesn't feel it, then it doesn't know it.

      Actually, even if it feels its power is low, it only believes it, it doesn't know it. Maybe its power is not really low; it just feels like it.

      The only two kinds of things we can know for sure (rather than just believe with high probability) are necessary truths like (1) 2 + 2 = 4 or not(P & not-P) and (2) that we are feeling whatever it feels like we are feeling while we are feeling it (Descartes' Cogito, on certainty).

      If you want to know why and how believing something that is in fact true, and even believing it for the right reasons, is still not the same as "knowing" it, see some of the Gettier koans about knowing.

      And if you're really desperately interested in knowing more about knowing, see:

      Harnad, S. (2011). Lunch Uncertain. Review of Floridi, Luciano (2011): The Philosophy of Information (Oxford University Press). Times Literary Supplement 5664: 22-23.

    12. To expand on Prof Harnad's point about "knowing":

      If one takes "knowing" as implying certainty, then most organisms that we construe as able to know things don't actually know anything. For instance, most mammalian species have to navigate complex environments, which requires processing information about those environments. A rabbit needing to avoid predators need not determine that path A is certain to be free of predators, only that path A has the lowest probability of causing a run-in with a predator. Numerous organisms make those kinds of decisions very often, and that type of process need not involve "feeling" in any sense (although emotional reactions might offer a reliable heuristic for making those decisions, that is only relevant if we equate emotion with "feeling"). A sketch of this as a bare comparison appears below.

      I assume a similar story could be told about the rabbit (or robot) "knowing" that it is tired.

      So a robot does not need to "know" that it is running low on power (even if it could).
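
      In code, the rabbit's path decision sketched above is nothing beyond a comparison over estimates (the risk numbers are invented for illustration); no certainty, and nothing felt, is required:

        # Unfelt "know-how" as bare selection over estimated risks.
        predator_risk = {"path_A": 0.12, "path_B": 0.35, "path_C": 0.20}
        chosen = min(predator_risk, key=predator_risk.get)
        print(chosen)   # path_A: the lowest estimated risk wins, no feeling involved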

    13. THE NEED TO KNOW
      @Frédéric Banville

      But even to believe it is running low on power, the robot needs to feel. Otherwise it is simply a servo that can detect (and correct) when it is running low on power.

    14. @ Stevan Harnad

      Of course, but I don't see how the rabbit in my example has to believe anything. The robot, likewise, doesn't need to believe that it is running low on power...?

    15. KNOW-HOW: UNFELT & FELT

      @Frédéric Banville

      All the robot or the rabbit "needs" is doing, and that's the problem.

      Neither believing nor, a fortiori, knowing is needed for doing. And that, once again, is the problem.

      My point was that "knowing" (and "believing") are weasel-words insofar as consciousness (feeling) is concerned: unconscious know-how is not knowing, it's just know-how, as in "a vacuum cleaner knows how to suck up dust."

      The difference between the rabbit and today's (sub-Turing) robots is that the rabbit has know-how and also feels. The robot (and the teapot, and the vacuum cleaner) only has know-how (doing).

  4. Two fundamental things for understanding the mechanisms underlying consciousness, which Axel Cleeremans mentioned when he presented the misconceptions about it: consciousness is not a single thing, and consciousness is not all-or-none!

    1. Both these things might warrant a little more reflection, starting with replacing the vague weasel-word "consciousness" with "feeling": There are many different feelings, qualitatively and quantitatively (and temporally), but whether an entity feels anything at all (or feels anything at all now) is an all-or-none matter, and feeling is feeling, not something else. It is the difference between organisms like us and teapots. Axel Cleeremans' findings have no bearing whatsoever on that; they are merely about detection thresholds.

    2. COMMENTS COPIED AND PASTED FROM THE CORRESPONDING POST ON FACEBOOK:

      TURING CONSCIOUSNESS :
      "Both these things might warrant a little more reflection, starting with replacing the vague weasel-word "consciousness" with "feeling": There are many different feelings, qualitatively and quantitatively (and temporally), but whether an entity feels anything at all (or feels anything at all now) is an all-or-none matter, and feeling is feeling, not something else. It is the difference between organisms like us and teapots. Cleeremans' findings have no bearing whatsoever on that; they are merely about detection thresholds."

      FREDERIC SIMARD :
      "Double like for "consciousness is NOT all-or-none", working with monkey definitely lead me to think they are conscious, to a lesser extent than human, but still conscious... And I'm ready to extent it to dogs/cats and so on..."

      TURING CONSCIOUSNESS :
      "Being a little bit conscious is like being a little bit pregnant..."

      FREDERIC SIMARD :
      "And just like there are several stages in consciousness, there are several stages of pregnancy which manifest differently in the behavior and anatomy of the pregnant women."

      TURING CONSCIOUSNESS :
      "Distinguish the question of (1) *what* and *how much* you feel from the question of (2) *whether* you feel (anything at all). The former is a matter of quality and quantity, but the latter is all-or-none. The "hard problem" is about the latter (2), not the former (1)."

      SHADY RA :
      "Let's say that other beings (such as dogs and monkeys) feel. We accept it and we take it as granted, or we rely on subjective studies of elephants behave, etc. They feel, just as once one woman is pregnant, she just cannot be semi-pregnant: an all-or-none condition. We take it as granted.

      From that point on, how come it seems so irreconcilable to accept that consciousness exists, that it emerged at some point, but not necessarily yield the same characteristics as in human beings.

      I mean that flies perceive a different array of wavelengths, which might yield a different consciousness of the world that surrounds us. Every time I open up the faucet in my kitchen, the water flowing down will have a different stream in absolute. It might be th same for consciousness: it emerges and once emerged, it might follow a different way of "expressing" itself, with several characteristics coming from our biological existence, our brain capacity, etc."

      FREDERIC SIMARD :
      "You are right, but I guess (and I'm sure this is what motivates Dr. Kiley-Worthington) studying the various level of consciousness can lead us to understand the causation of it. Just like pregnancy, at first a baby comes out, rolling back the mother became bigger and bigger over time and rolling back again a man did something funny to her ;), but I'm not adding anything to the discussion..."

    3. (In response to the Turing Consciousness FB comment pasted above)
      I have difficulty, in the context of this summer institute, ascribing a 'weasel-word' label to "consciousness". Surely, one of the aims of this institute should be to better define the term "consciousness", not to do away with it. Evidently, not everyone agrees that there is a one-to-one correspondence between feeling and consciousness.

      That is not to say that I do not think we should investigate the merits of the association. As a heuristic, the association is obviously potent. It enables us, as Dr. Harnad demonstrated above, to frame research questions, i.e., to decide whether to investigate (1) or (2).

      My own intuition, however, is that answering (2) will come from findings in (1). If (1) addresses "how much you feel", as Dr. Cleeremans has done, then it deals with 'feeling' on a spectrum. I do not see why this is problematic (or unsatisfying). Presumably, a research project that investigates the extremes of the feeling spectrum (through questions that fall within the domain of (1) proper) has the potential to uncover truths about (2).

  5. I find the study Dr. Cleeremans mentioned amazing: the size of a visual illusion bears a systematic relationship to retinotopy and the size of V1. This is a good example where the signal from the stimulus is the same but the percept differs among individuals in quantitatively measurable ways. By correlating the reported percept with the actual functional organization of the cortex, the evidence indicates, albeit indirectly, that the neural correlates can reside in visual cortex as early as V1.

    1. It is also amazing as an example of how a physical limitation, namely how close neurons can get to one another, makes a difference in our subjective experience.

    2. True! It makes me think of the Necker cube: the stimulus remains the same while the perceptual feeling you get from it keeps switching between two states. I know that Crick and Koch's theory proposes that neuronal networks (each including all the neurons firing together for certain characteristics of the stimulus) compete with each other to drive a given conscious perceptual feeling. The Necker cube thus offers an online phenomenological insight into the competition over which network gets to drive the conscious experience.

  6. Yes, this is Schwarzkopf, D.S., Song, C., & Rees, G. (2011). The surface area of human V1 predicts the subjective experience of object size. Nature Neuroscience.

  7. Awesome talk, and Dr. Cleeremans did a phenomenal job of making his arguments accessible even to a skeptical biologist.
    I was wondering if Dr. Cleeremans might have any advice for Kiley-Worthington on how she might design objective experiments to examine whether her Equines and Elephantidae merely have knowledge 'in' their system or have knowledge 'for' their system (metarepresentations)?

  8. Does anyone find it rather ironic that every time we post here we have to "prove that you're not a robot"?

    1. Absolutely! And I gotta say: I don't pass the test at times!

    2. This is a really interesting point you are bringing forward, Inge. And the irony is striking. Many times, as a joke, I say "oh, today I don't think I'd pass the Turing test..."
      Let's do a thought experiment:
      If a being that subjectively knows it is conscious (a fellow human being, let's say) fails the Turing test, is that being non-conscious? This is a kind of "inverted Chinese Room", and I think it deserves serious thought!

    3. TURING-TESTING, MIND-READING AND THE OTHER-MINDS PROBLEM

      The Turing test is for testing whether our causal model of how the human mind works can actually do everything that a human with a mind can do. Its only interest is that we know how the model works. We don't know how people work, so there's no point (or meaning) in Turing-testing them.

      (However, it is true that Turing-testing is based on the same sort of "mind-reading" that we do with one another (and with other species) all the time, and that it is only via what an organism can do (and say) that we can judge whether it has a mind. Off-days for people with minds are just that: off-days for people with minds. I have them all the time...)

  9. I am intrigued by the idea that consciousness stems from the brain's constantly redescribing its own state to itself. On Dr. Cleeremans' account, consciousness would simply be the brain's nonconceptual theory about itself.
    While this account is certainly interesting, it falls prey to the same fault as almost every theory examined in the summer school, i.e., it does not address why such recurrent processing would yield felt experience. Dr. Cleeremans seems to support the idea that consciousness is not all-or-none, yet IMHO a conscious sensation is either felt or it is not.

    Doctor Cleeremans’ account makes sense for a computational, information-integration point of view, and yields am interesting functional account of consciousness, but does not address why metacognition should be felt. Indeed, this is the stumbling block for most higher-order theories of consciousness: why should a mental state being about another mental state yield conscious feeling?
    Consider the following thought experiment:
    I see a chair. I thus have a mental state about that chair. The chair itself, however, is not conscious, despite my having a mental state about that same chair. Hence, we have a case where my having a mental state about X does not make X conscious. In the same line of thinking, I don’t see why a mental state being about another mental state would make the latter conscious. Perhaps I am oversimplifying, or missing something.

  10. Did you notice that the thermostat example at the beginning of the talk is a reformulation of Searle's Chinese Room (within a more closed system)? The thermostat can report the temperature and react to it, but in no way does it have any knowledge of, or care about, what temperature is.
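
    A minimal sketch of that point (the setpoint and reading values are arbitrary): the thermostat's entire "competence" fits in one comparison, and there is nothing anywhere to which temperature could mean anything:

      # A thermostat "reports" and "reacts" to temperature without any
      # knowledge of, or care about, what temperature is.
      def thermostat(reading_celsius, setpoint=20.0):
          return "heater_on" if reading_celsius < setpoint else "heater_off"

      print(thermostat(18.5))   # heater_on
      print(thermostat(22.0))   # heater_off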

  11. Are we really conscious of being conscious at all times? (16:06)

    1. In Rosenthal's theory, one does not need to be conscious of being conscious in order to be conscious: the higher-order thoughts that do the causal job of making their target representations conscious are themselves unconscious.
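
      The structure just summarized can be written down in a few lines (a toy rendering for illustration, not a model): a state counts as conscious iff some higher-order thought targets it, while the targeting thought need not itself be targeted:

        # Toy rendering of the higher-order-thought (HOT) structure.
        # Maps each mental state to the state it is about (None = not about a state).
        about = {"seeing_red": None, "hot_about_seeing_red": "seeing_red"}

        def is_conscious(state):
            return any(target == state for target in about.values())

        print(is_conscious("seeing_red"))            # True: targeted by a HOT
        print(is_conscious("hot_about_seeing_red"))  # False: the HOT is itself unconscious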

    2. ...but as soon as you ask someone "are you conscious?", they should be able to access the answer "yes"; in other words, they should be able to become conscious that they are conscious at any given time?

  12. Xavier Dery @XavierDery

    Cleeremans presents four misconceptions about consciousness... There are current scientific studies relying on one or more of these! #TuringC

    3:20 PM - 7 Jul 12 via Twicca Twitter app

  13. This comment has been removed by the author.
