Monday 2 July 2012

Wayne Sossin: Aplysia: If We Understand the Cellular Mechanisms Underlying Sensation and Learning, What Do We Need Consciousness for?


      Abstract: Aplysia is a model system for defining the relationship between neuronal plasticity and behavior. Indeed, many of the circuits underlying Aplysia's simple behaviors are understood, as well as how the animal can change those circuits with experience. Moreover, it is beginning to be understood how biogenic amine and neuropeptide pathways can activate or inhibit distinct motor programs to bias the animal's decisions about what to do. I will explore, given the limited number of neurons in Aplysia, whether additional pathways are present that could give rise to the complex feedback systems that are probably required for consciousness.

    Sossin WS. Defining memories by their distinct molecular traces. Trends Neurosci. 2008 Apr;31(4):170-5. http://www.ncbi.nlm.nih.gov/pubmed/18329733
    Cropper EC, Evans CG, et al. Feeding neural networks in the mollusc Aplysia. Neurosignals. 2004 Jan-Apr;13(1-2):70-86. http://www.ncbi.nlm.nih.gov/pubmed/15004426
    Kandel ER. The molecular biology of memory storage: a dialogue between genes and synapses. Science. 2001 Nov 2;294(5544):1030-8. http://www.ncbi.nlm.nih.gov/pubmed?term=Kandel%20dialogue
    Baxter DA, Byrne JH. Feeding behavior of Aplysia: a model system for comparing cellular mechanisms of classical and operant conditioning. Learn Mem. 2006 Nov-Dec;13(6):669-80. Review. http://www.ncbi.nlm.nih.gov/pubmed/17142299

Comments invited

33 comments:

  1. Wayne did a fantastic job of laying out distinct criteria for consciousness, following this with a careful review of each criterion in his model organism - the first of the speakers to do so coherently.
    I believe, however, that he left out the criterion of agency.
    Before this point, though, I also think it possible that Aplysia could satisfy the requirements for consciousness by having recurrent states of electrical activity. Given its circadian cycle of action and rest states, it seems to have some primitive form of volitional drive to feed and reproduce. It remains 'online', ready to receive sensory input and to emit behaviour.
    On the sense of agency: Aplysia carry out some of their stereotyped behaviours with the 'belief' that the behaviour will effect change in their environment. Squirting ink will ward off predators, and releasing hormone will attract a mate.
    Yes, the Aplysia is a reductionist model - but I don't think anyone would disagree that our volitions and beliefs are writ (somewhere) in our circuitry. As Haggard suggested, we are just more complex marionettes with more higher-order strings acting on our Aplysia-esque strings.

    ReplyDelete
    Replies
    1. I do not understand where in the Aplysia circuit agency appears. I do not believe Aplysia inks because of a belief that the behavior will change its environment. It inks because the circuit was programmed by evolution to ink in the presence of a noxious stimulus. I do not see a fundamental difference between this and a robot I program to ink when a touch sensor reaches a threshold.
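
      A minimal sketch of such a robot (purely illustrative; the threshold value and names are assumptions of this sketch, not anything from the talk or from Aplysia biology):

      ```python
      # Toy stimulus-response "robot": ink whenever a touch sensor
      # crosses a fixed threshold. No belief or goal appears anywhere
      # in the loop; the stimulus-response mapping is fully pre-programmed.

      INK_THRESHOLD = 0.7  # arbitrary trigger level (an assumption)

      def ink_reflex(touch_level: float) -> str:
          """Return the action evoked by a touch-sensor reading."""
          if touch_level >= INK_THRESHOLD:
              return "ink"        # noxious stimulus -> fixed response
          return "do nothing"     # sub-threshold input -> no response

      for level in (0.2, 0.9):
          print(level, "->", ink_reflex(level))
      ```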

      Delete
  2. I agree that this was a very interesting talk, and whether one agrees or disagrees with Wayne's conclusion regarding Aplysia's (lack of) consciousness, he identified in a clear and accessible manner the criteria used to substantiate his position and to foster further discussion. Throughout the symposium so far, differing perspectives on a definition of consciousness have been the inadvertent root of many debates, precluding deeper discussion of its limits, putative functional roles, etc.

    ReplyDelete
  3. Dr. Sossin mentioned the point of view that some people have that consciousness is an emergent property. I was wondering if other higher functions could also be emergent properties. For example, is being able to recall an episodic memory an emergent property? Is it possible that we have memories as a "side-effect" of plasticity in the brain? The memory trace that is formed during processing/learning is reactivated when we recall a memory. Perhaps the presence of this memory trace is sufficient for a conscious memory to arise?

    ReplyDelete
    Replies
    1. This is an interesting idea, but I would say that the trace is required for a conscious memory, not sufficient for it. The trace is always there; the issue is how to activate it. The memory is reactivated by association (it is hard to consciously access a memory at will, as many of you have probably found when writing exams), but how the reactivation then gains access to consciousness is an interesting issue. The gaining-access part is probably not a side-effect.

      Delete
  4. Most of the commentaries following this talk were aimed at Aplysia's degrees of freedom (its ability to make decisions vs. its pre-programmed behaviors) and/or Aplysia's ability to feel. If Aplysia were conscious in any way (and maybe it is), I don't think these two features refer to a single kind of consciousness. Maybe the ability to feel refers to a minimal kind of consciousness (1) (something like a readiness to encompass environmental constraints), but the notion of freedom refers to the ability to take notice of one's own reactions, which is a higher kind of thought (2). Whereas (1) in Aplysia seems quite uncontroversial, I think we have no evidence for (2); and (2) is the real deal.

    ReplyDelete
    Replies
    1. I agree, but am not sure that robots could not be programmed to be ready to encompass environmental constraints. I think one of the keys to the consciousness debate is to differentiate the zombies (robots) from conscious organisms. Of course, my inability to correctly rewrite the distorted letters required by Google to differentiate myself from a robot may limit my ability to make this argument.

      Delete
  5. This comment has been removed by the author.

    ReplyDelete
  6. Sossin argued: if we completely understand the biology of an organism's nervous system and we don't find consciousness, then consciousness isn't there. A robotic Aplysia would pass the Aplysia Turing test! Although he might have oversimplified Aplysia to make it useful as an example, we can still imagine eventually being able to completely explain the biology of an organism and not finding a correlate for consciousness. So, when we get close to completely explaining an organism (I don't think we are there yet for Aplysia), we potentially WILL find a neural correlate for consciousness.

    ReplyDelete
  7. NONHUMAN TURING TESTING

    The ideal way to scale up to the human Turing Test would be to start with evolutionarily earlier and simpler organisms.

    The trouble is that just as our robots are only toys, today, relative to everything a human can do, so our "ethogram" for what other species can do is just a toy, compared to all that they really can do. So for the first (and most important) criterion of the TT (being able to do anything the real thing can do), we have no idea whether we have reached it, or have even come close.

    And for the second criterion of the TT -- being able to do it indistinguishably from the real organism, to the real organism -- it's even easier to fool organisms into thinking something is one of them: Sometimes it just takes a prominent visual cue or the right smell.

    So we're stuck with the human TT, which draws on our powerful (human) "mind-reading" capacities (about which Professor Baron-Cohen will be talking soon).

    I do think that Wayne waffled a bit over the question of whether Aplysia really feels or merely detects and responds. (In case anybody's wondering, I would not eat one. And in case anyone didn't pick it up, the eating criterion is not just a joke. Real feelings are at issue, and if anything at all matters, feelings matter.)

    ReplyDelete
    Replies
    1. I don't know if my question is relevant, but I was wondering: if we program a robot to imitate a human responding to the TT, could we say it is conscious, since it only imitates a state of consciousness?

      Delete
    2. If it only imitates consciousness, it won't pass the TT. To pass the TT it needs language, true language based on grounded symbols; that requires consciousness: not simulated consciousness, but emulated consciousness. Therefore the TT implies not only T3, but also T4 (i.e. not only simulated consciousness, but a simulated brain capable of consciousness).

      To be a pen pal with Stevan for 40 years, you surely need to be able to get out of your immediate spatiotemporal reality; you have to be able to project your thoughts beyond what you can feel here and now. Animals can't. If the TT is a consciousness test, animals are not conscious. So Stevan can have a steak!

      Delete
    3. It may be easy to trick other organisms into thinking something is one of them, but it doesn't seem like it would be too hard to convince humans that something is truly an instance of a specific species.

      Delete
    4. The Aplysia Turing test is an idea that does not make any sense, any more than other nonhuman Turing testing does. I wonder why we heard about this idea so often during the school.
      In order for a living entity to administer a Turing test and conclude anything about the presence of a mind in a similar entity, it needs to have a mind in the first place!
      Building an Aplysia robot that is indistinguishable from a real Aplysia (from an Aplysia's perspective) would not tell us anything about consciousness in this animal, or in any other animal.

      Delete
  8. Doctor Sossin argues that Aplysia cannot be conscious, notably because it lacks the appropriately complex recursive and re-entrant information-processing capacities that he speculates would be required by an organism if it were to have conscious states. Because Aplysia seems to lack such feedback mechanisms, it is argued, it cannot be conscious.

    This line of thinking presupposes that consciousness is somehow a computational property, and further that consciousness is a recursive or reflexive property that tells the system something about itself. Sossin hinted at the possibility that such a state requires the capacity to predict the next input and to plan ahead. While certainly some aspects of our human consciousness have these reflexive features, I would argue that the latter are not the "mark of the felt," if you will. It seems to me that experience does not necessarily entail a homunculus, or a subject for whom the feelings are experienced, as Sossin seems to suggest.

    Maybe some phenomenology could be of use here. (Apologies to those of you who dislike phenomenological “jazz hands.”) From a Husserlian point of view, there is no need for a homunculus, as felt experience merely has two aspects: an “ego-pole” and an “object-pole.” There is only the phenomenon of experience, and this experience has distinct features. The fact that experience seems to happen to a subject is merely a feature of our human experience, one that is probably realized by feedback and reflexive processes. However, when we speak of feeling proper, there need not be a homunculus, nor a subject “doing the feeling,” as it were. In the case of our human awareness, there is indeed a formal aspect, a kind of “mineness,” that is, the aspect of a feeling that makes it seem as if it was happening “to me.” Mineness is simply a feature of our human experience, and IMHO, it may not be a feature of the feelings of other species.

    The “mineness” of experience could very well be explained by recursive processing, as Sossin speculates, but my point is that experience need not be so structured. It could very well be that experience and “mineness” are indeed related in our own, typically human, type of experience, whereas the experience of more simple life forms may not require such a centralized representation.

    What if consciousness is not a computational property, as Doctor Harnad has argued? Is it not possible that Aplysia is apt to feel, despite the apparent simplicity of its nervous system? While Aplysia certainly is not self-conscious, there may very well be “something it is like” to be an Aplysia. I would argue that the mere fact that Aplysia’s nervous system does not seem to contain feedback loops does not argue against its having feelings.

    ReplyDelete
    Replies
    1. Great points. I think reductionism is key. In the search for the causal origins of consciousness, if we are to critically assign definitions, these crazy molluscs are the perfect place to be looking.
      For me, where the black cloud of ambiguity keeps coming up is in defining what it is to feel (not HOW or WHY organisms feel, but WHAT it actually is). Using overt behavior as a measure of internal processing, I see no reason why Aplysia's sensory circuitry does not constitute feeling - it simply operates on a reduced biological circuit.
      Tail shock --> internal computation (the magic!) --> behavioural output
      WE feel we feel, an Aplysia just feels.
      I wonder what Skinner would think of Aplysia?

      Delete
    2. "It seems to me that experience does not necessarily entail a homunculus, or a subject for whom the feelings are experienced, as Sossin seems to suggest."

      I certainly disagree with you, as would Searle and every other scientist and philosopher who considers subjectivity to be a fundamental component of consciousness or feeling. You've stated clearly that you do not belong to this group, and believe consciousness/feeling can exist without subjectivity, so I would be interested to hear your elaboration on what you suppose it may feel like to be an Aplysia without subjectivity. This concept seems inextricable from a scenario in which an unconscious organism/system/robot operates in an environment in an entirely reactive manner, providing the appearance of awareness but lacking any subjectivity or intentionality.

      Delete
  9. Wayne asked: "If We Understand the Cellular Mechanisms Underlying Sensation and Learning, What Do We Need Consciousness for?"

    "Understanding the cellular mechanisms underlying sensation" means understanding the neural causal chain starting from an external stimulus through a sensor (physical-to-neural transducer) through a multitude of interneurons (neural-to-neural) through an effector (neural-to-physical transducer) to an external action. Makes me think of a Rube Goldberg machine.

    "Undertanding learning" means understanding how neural causal chains change under utilization. More Rube Goldberg machines.

    And Wayne tells us that this is all there is in an Aplysia. However, at some point he mentioned the "spontaneous firing" of some neurons. Well, there is nothing "spontaneous" in an RG machine. These neurons are firing for a reason; a causal chain needs a cause. These neurons are sensing internal conditions. They could be nociceptors. Maybe the Aplysia had a ganglionache, not to say a headache. Is that sufficient to say that it felt a ganglionache? It certainly sensed something. Were it conscious, it might interpret it as a ganglionache, but it has no interpreter. Interpreting means going one step beyond sensing: identifying the cause of the ganglionache. Aplysia doesn't know anything about causes; it just feels the effects. Whatever applies to Aplysia applies very far up the mammalian evolutionary chain, and certainly applies to robots.

    On my poster, I wrote:
    "Consciousness is knowing that perception is a representation of reality". Of course that begs the question(s); at least three questions about 1- knowing,
    2- perception, and
    3- representation.
    Some might add reality, but aren’t we all declared materialists?

    So, let me try again: "Consciousness is feeling that our feelings are (causally) related to a physical reality".

    Second order feeling, but not a felt feeling, rather a feeling about feeling.

    I wouldn't eat an aplysia, but not for the same reason...

    ReplyDelete
  10. This comment has been removed by the author.

    ReplyDelete
  11. Dr. Sossin said that Aplysia does not learn in the sense of forming spatial memory. But were the experiments done in such a way that the stimuli were simple enough to be differentiated by its rudimentary vision? If we presented a mate/food associated with light/dark chambers (which Aplysia can differentiate) instead of a maze environment with complex spatial/visual cues, would Aplysia still not learn to seek that particular context?

    ReplyDelete
    Replies
    1. The idea of using light/dark chambers (or light/dark paths in a Y-maze) is a great one for assessing the possibility of a sort of higher-order learning in Aplysia. I have no idea whether it would work, since I am not familiar enough with Aplysia, but I can offer some speculation. We know that Aplysia can learn from experience and that it is probably sensitive to light, so if it is also sensitive to spatial location then it should work. I wonder what it could have that would help it learn spatial location, though. Without a finely tuned visual or auditory system, where would the required spatially selective neurons get their input? I guess it would be possible to create a mental map of the environment using only tactile sensory information - maybe a different texture on the walls of the two sides of the maze... it would definitely be an interesting experiment! That being said, I suspect that such an experiment would not shed further light on the question of Aplysia consciousness...

      Delete
    2. Yes, tactile stimuli may be interesting!

      Delete
  12. Sossin said in his talk that it is difficult to measure consciousness in animals separately from action. Was he talking just about motor action? If Aplysia's neural system is not sufficient for consciousness, what is the simplest animal that could have consciousness? Could this be a way to identify the necessary and sufficient conditions for consciousness?

    ReplyDelete
    Replies
    1. I think it would be amazing to know whether a given action is emitted by the organism because of evolution, because of an S-R relationship, because..., and all this without the intervention of consciousness. Particularly in W. Sossin's talk, it's very hard to tell whether Aplysia have consciousness: they do learn, and they can "behave" according to these learnings, but to what extent can these behaviours be understood as accompanied by consciousness? Are Aplysia conscious of their actions? Is there anything like being conscious of moving, of eating, or of whatever actions Aplysia perform in real life?

      I think your question is very interesting, but yes, even in humans it's hard to separate consciousness from action, because even the NCC, the neural correlates of consciousness, as described in A. Shmuel's talk, are not empty of action (people think, feel, fear, etc.).

      Delete
  13. Originally posted on facebook
    ERIC MUSZYNSKI:
    Sossin: "[Aplysia have] no need for firing in the absence of inputs" - could this be a clue to our self-awareness? Our neurons are supercharged and MUST discharge even in the absence of stimulus or motor control, so it accidentally turned into self-awareness... It would then be just a spandrel of our powerful brain? No function needed to explain it.

    MARJORIE MORIN:
    It's a very interesting point. He said that consciousness may require a certain minimal brain size. If we follow your logic, it might not be the size of the brain so much as how much of it is "occupied" by what it has to process that could bring about consciousness?

    ReplyDelete
  14. PLANNING AND CONSCIOUSNESS: Sossin claims that planning is a feature of being conscious. I strongly disagree. Planning is an additional function that consciousness allows, but an animal can be conscious of its present and yet not have any ability to plan beyond the close future.

    ReplyDelete
  15. Sossin also says that you need memory to be conscious: we are conscious because we have memory. I disagree. If I could impair my memory formation (for anything longer than the order of seconds), I could still be conscious of my present at every moment, even if I could not access my conscious experiences from the past.

    ReplyDelete
    Replies
    1. The fact is, however, that every organism to which we seriously consider attributing consciousness possesses memory. As a result, any serious attempt at building a theory of consciousness cannot disregard the role of memory. Sure, HM had severe anterograde amnesia, as well as some retrograde amnesia. But he still had some remnants of working and procedural memory, which are the systems that 'feeling', or consciousness as an all-or-none phenomenon, is most likely to tap into. Thus, I tend to side with Dr. Sossin.

      Delete
  16. I agree with Diego! I think the example of HM was brought up in another discussion, but as was mentioned I don’t think anyone would agree that he was not conscious, although lacking long-term memory and probably the ability to plan beyond the close future.

    Also, I really enjoyed Dr. Sossin's strong point of view; though some might consider that he oversimplified things, I thought he made some great points. Furthermore, thinking back on Aplysia in comparison to other animals that have been discussed during the summer school, I was wondering what differs, in ways that could be related to consciousness, between, for example, an octopus and an Aplysia. Do people who consider octopi conscious also consider, by the same criteria, Aplysia conscious? Or do some people consider octopi conscious while not attributing consciousness to Aplysia? Would Dr. Sossin consider octopi conscious?

    Izabo Deschênes

    ReplyDelete
  17. I was very convinced by Dr. Sossin's definitions of animal consciousness, especially the behavioural aspect of rich discriminatory abilities. I think this was a great point which Dr. Sossin briefly touched on: without the appropriate sensory and integration abilities, an organism is unable to be conscious of its present situation. I do agree with the above discussion that planning may not be necessary for a basic level of consciousness - that an organism may be conscious of the present without necessarily having the ability to plan beyond the near future.

    ReplyDelete
  18. Short question, but I think it is a legitimate one...
    If consciousness is a continuum, how can we give criteria for it, and why would it be a black-or-white question?
    I am not necessarily talking about feeling there. Up to now, I don't think we've come up with any operationalizable criterion (or even a definition that goes beyond one's own feeling) for the question of feeling.

    ReplyDelete
  19. HOW MUCH YOU FEEL IS A CONTINUUM -- WHETHER YOU FEEL AT ALL IS ALL-OR-NONE

    If you prefer it in "consciousness" lingo: How much you are conscious of is a continuum -- whether you are conscious at all is not.

    Operationalizable? (Please see the other-minds problem and the Turing Test...)

    ReplyDelete
  20. Hard problem of consciousness solved by bridging the explanatory gap http://www.consciousnessexplained.org

    ReplyDelete