Sunday, 8 July 2012

Eva Jablonka: Evolutionary Origins of Experiencing


      Abstract: An approach focused on the evolutionary transition to experiencing -- to the first organisms with phenomenal consciousness -- can enable the identification of fundamental organizational principles involved in experiencing. Based on the heuristics of origin-of-life research, we outline a parallel approach to experiencing, and suggest that just as function emerged with the transition to life, felt needs emerged with the transition to experiencing. We argue that experiencing is a facet of open-ended associative learning in neural animals with a CNS, and that the evolution of associative learning was a key factor in metazoan diversification during the Cambrian. It endowed animals with motivation and increased their discrimination powers on the basis of systemic reward systems. Tracking the molecular and neural correlates of associative learning as they emerged during evolutionary history may therefore shed light on the dynamics that underlie elementary forms of experiencing.

      Simona Ginsburg and Eva Jablonka (2010) Experiencing: a Jamesian approach. Journal of Consciousness Studies 17:102-124. http://www.openu.ac.il/Personal_sites/download/Simona-Ginsburg/Experiencing-A-Jamesian-Approach2010.pdf
      Simona Ginsburg and Eva Jablonka (2007) The Transition to Experiencing: I. Limited Learning and Limited Experiencing. Biological Theory 2(3):218-230.
      Simona Ginsburg and Eva Jablonka (2007) The Transition to Experiencing: II. The Evolution of Associative Learning Based on Feelings. Biological Theory 2(3):231-243.
      Simona Ginsburg and Eva Jablonka (2010) Associative learning: a factor in the Cambrian explosion. Journal of Theoretical Biology 266:11-20.

Comments invited

22 comments:

  1. MODES OF BEING?

    (1) "Experience" is (another) weasel-word: Is it felt experience or unfelt experience? If it's felt, it's feeling. If it's unfelt, it's a teapot (doing).

    (2) The analogy with life, vitalism, and the eventual functional and evolutionary explanation of life -- doings: causal and complete -- does not work for feeling, because with life there is nothing more, and never was anything more, that could be its "vital force" that (allegedly) made it a "hard" problem to explain life causally in the usual way (doings). With feeling there is something more, and each one of us knows exactly what it is.

    (Probably the reason vitalists were vitalists was that they were actually animists, and the "vital force" they had in mind (literally) was feeling.)

    ReplyDelete
    Replies
    1. (1) I don't think that experience is a weasel word. She defined it as feeling at the beginning of her talk; Dr. Jablonka simply decided to use a different word to put forward her theory.
      (2) How do you know that if we explain all the doings we won't get feeling? How do you know that there is not some neural mechanism that converts information into a feeling? What we have yet to discover is that mechanism. And if I understood correctly, Dr. Jablonka is interested in finding exactly that mechanism using her approach of studying very simple experiencers, or, in your words, feelers.

      Delete
    2. Eva Jablonka:
      1. "Feeling", for most people, captures mainly the affective aspect of experiencing, so we decided to use the word experiencing, which we think works better and includes both affective and perceptual aspects. Our teaching experience suggested that feeling misleads the students while experiencing does not, and is intuitively understood as phenomenal consciousness. Experiencing is not the simple, unqualified equivalent of information processing. It is a very special type of information processing, just as living is a very special type of chemical reaction.
      2. The analogy to vitalism is far stronger than you and Chalmers suggest. Read, for example: Brian Jonathan Garrett, "What the History of Vitalism Teaches Us About Consciousness and the 'Hard Problem'", Philosophy and Phenomenological Research, Vol. LXXII, No. 3, May 2006.
      Evan Thompson (2007) has a long and very good discussion of this issue in Mind in Life (especially chapters 8-10).

      Delete
  2. I missed the logic behind how unlimited associative learning necessarily requires consciousness. Can anyone summarize?

    ReplyDelete
    Replies
    1. I am with you in not getting this point. I agree with one of the people who asked questions at the end that it is a bit of circular logic. Dr. Jablonka looked at which organisms COULD potentially have consciousness, put together a behavioural function these animals can do in common, i.e. 'unlimited associative learning' as she defines it, called that a criterion for consciousness, and listed a bunch of neural correlates that could serve this function. She does stress that this is her attempt at some type of criterion and that she doesn't know if she is right... it is just a theory, a bit like the global workspace theory except that the workspace can now do learning!

      Delete
    2. Eva Jablonka
      We propose that the core property of unlimited associative learning in neural animals is the formation of rich, memory-dependent, ontogenetically constructed, integrated sensations and coordinated actions. The relation between unlimited (that is, very flexible) associative learning (UAL) and experiencing is evolutionary: we argue that in biological organisms, the evolution of associative learning entailed experiencing, because UAL involves a set of dynamic processes and organizational properties of the embodied nervous system that led to what we call categorizing sensory states (CSSs). That is why UAL is a good indicator of experiencing in animals (not in robots). In our papers we explain this position in some detail and point to the biological preconditions that enabled it, and to the properties that are facets of UAL: binding, memory at several levels, temporal synchronization, hierarchical mapping and meta-representations, compensatory and inhibitory mechanisms, embodiment, etc. (see ppt). The functions of UAL in animals, as well as the organizational dynamic properties enabling UAL, are in line with what we see as the functions and basic characteristics of phenomenal consciousness (experiencing).

      Of course our suggestion is a theory – but it has interesting predictions and suggests new avenues of research, for example, identifying the physiological (not just neural) correlates of UAL, and experimenting with this aspect of "doing", to mention just the most obvious ones.

      Delete
    3. CORRELATION VS CAUSATION

      Eva Jablonka lists a number of functions that are correlated with the capacity to feel in organisms:

      -- unlimited [i.e. very flexible] associative learning
      -- rich, memory-dependent, ontogenetically-constructed, integrated sensations and coordinated actions.
      -- categorizing sensory states (CSSs).
      -- binding, memory at several levels, temporal synchronization, hierarchical mapping and meta-representations, compensatory and inhibitory mechanisms, embodiment, etc.


      But there is no explanation of why or how those functions cause or entail feeling in order to do what they do. So far they are just functions that are correlated with (hence predictive of) feeling:

      "in biological organisms, the evolution of associative learning entailed experiencing [feeling]" (how? why?)

      "UAL is a good indicator of experiencing in animals (not in robots)" (why in organisms and not in robots? and as we scale up toward T3 robots?)

      "functions of UAL… are in line with… the functions… of experiencing [feeling]" ("in line with" does not explain how or why: feeling that an object is getting heavier is in line with the object getting heavier, and with being able to do what we can and can't do with objects as they get heavier; but where is the explanation of the fact that the function feels like anything at all?)

      Delete
  3. On the question of the simplest type of systems that can have a goal: the identity of some kinds of systems depends on their own activity. The identity of a (token) wave, for example, rests on how matter and energy pass through it, as with a flame or the red spot on Jupiter. Unlike a rock, such process structures are distinguished from their surroundings by a characteristic pattern of activity. If the pattern ceases, so does the system. This is a necessary but of course not a sufficient condition for a system to have a telos. Other conditions include that the system be auto-catalytic (thus unlike regular waves, but, like standing waves, self-perpetuating under certain conditions), but not, perhaps, that it be able to replicate or evolve.

    When we attribute any sort of telos to the simplest living systems, we presume these are un-represented, and of course unconscious, goals. We say things like "the bacterium is trying to maintain a certain concentration of x within its membrane". If we make the same sort of claim about a self-organizing process structure (such as the Bénard phenomenon, the red spot, or a star), are we making a claim that is more ontologically problematic than when we do so in the case of simple living systems? Some sort of information transfer system (e.g.) may also be a necessary condition for a system to have a goal, but this was not made clear in the talk (perhaps due to lack of time), and it is certainly a claim that needs explicit support if we are to draw the line between non-living process structures and those that are alive.



    ReplyDelete
    Replies
    1. Eva Jablonka
      Indeed, I did not have time to discuss the complex issue of teleological systems. However, there is no doubt that self-organization is not the same as a teleological process, at least not according to the way most philosophers of biology think. A very good book about this is: Juarrero, A. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge, MA: MIT Press.
      See also: Evan Thompson (2007) Mind in Life. Harvard University Press.

      Delete
    2. GOALS: FELT AND UNFELT

      A thermostat has a "goal": Keep the temperature at 22 degrees.

      A cold organism has a felt goal: Do something to make you feel warmer.

      Goal is a weasel-word, insofar as discussions of consciousness are concerned, because it comes in both varieties: felt and unfelt (as in a teapot).

      Delete
    3. Eva Jablonka
      Goals, like feelings, are loaded words. All the words we use need to be qualified. Systems exhibiting life and mind behave in an intentional, goal-directed manner. Dennett's intentional stance is a way of expressing this property of dynamic organization, which is applicable to all living organisms and to all products of organisms (such as robots) that show goal-directed behaviour. Mayr (1982, The Growth of Biological Thought, p. 47) suggested that when an entity has a goal which is guided by a plan, it displays teleonomic activities. Hence a robot has teleonomic activities, just as does the amoeba or the thirsty dog, since it is designed according to a plan or a program. So the intentional stance and the ascription of teleonomy apply to robots as well, and of course to thermostats. However, there is a big difference between living organisms and entities designed by living organisms, like thermostats: the goal is not intrinsic to the robot or the thermostat, so there is a fundamental distinction between entities with extrinsic and intrinsic goals (Kant's third critique, of teleological judgment, was devoted to this issue). The books to which I referred provide an in-depth analysis of these issues, which are central to the philosophy of biology. Felt needs are new types of goals, which appeared in living biological organisms in the context of the evolution of learning (open-ended, flexible associative learning). At a later stage of evolution, with the evolution of symbolic language, new goals emerged: the kind of values toward which humans (sometimes) strive.

      Delete
    4. ORIGINS

      Yes, robots are designed by people and organisms are designed by evolution. But what difference does that make, and how does it explain that organisms feel (e.g., goals) and robots don't (if they don't, even at the T3 level)?

      If we have a functional mechanism that can do certain things, what difference does its origin make -- i.e., whether it grew on a tree or was crafted on a workbench?

      Delete
  4. Thanks for the references, Eva. I have read some of Thompson, but had not heard of Juarrero or looked at Mind in Life, which looks to be right down my alley.

    I agree that self-organization does not, in and of itself, directly engender teleological properties. But I have argued (drawing on the likes of Maturana and Varela, Prigogine, and Millikan) that such properties only arise in systems whose identity is maintained over time by a characteristic cyclical exchange of matter and energy with the local environment. Such systems cease to exist if they fail to control this exchange. It is in their need to do so that, I think, we find a necessary (but perhaps not sufficient) basis for the attribution of the most basic sort of normative properties, in the form of homeostatic goals or values.

    Crucially, though, such properties are only apparent when one takes the system's "point of view", and thus conceives of it as a kind of agent. As Searle has argued, part of what makes the problems of both intentionality and consciousness so difficult to tackle with objective, scientific methods is that we are dealing with properties that result from the existence of particular, and therefore necessarily subjective, points of view. I think the explanation of the origin of normative properties suffers from a similar sort of conundrum, and that finding the right way to conceive of this dilemma will be crucial to any satisfactory account of the evolution of consciousness. Thus, while I agree that the capacity to integrate information from multiple sources is, plausibly, necessary for conscious experience to occur, we won't understand why this may be the case until we have a better grasp of how all forms of information are agent-dependent, and of how this integration is integration by and for individual agents and/or their progeny.


    ReplyDelete
  5. I think Doctor Jablonka’s approach is extremely interesting. I am partial to enactive and autopoietic conceptions of cognition and experience; IMHO, she gives an excellent account of how something as complex as representational cognitive states could emerge from inanimate matter.

    However, I do find that it shows the same incapacity to address the mechanisms of felt experience. It is not at all obvious to me why associative learning should be felt. Indeed, if one postulates that felt sensations arose, and thereafter were retained for adaptive purposes, one nevertheless has to address the question of why and how such felt experience should occur in the first place. In answer to my question, Doctor Jablonka said that feeling is a kind of doing. Such a doing would be an overall, integrated, persistent, embodied, and categorizing sensory state, that has evolved as a facet of associative learning. It is interesting to speculate, as she does, that once feeling becomes commonplace, it should become the “new telos” of biological systems; however, this is not an explanation of how or why feeling evolved.

    I believe Doctor Jablonka gives an intriguing explanation of how complex cognition could evolve from an autopoietic point of view, but that she doesn’t address the problem of how and why felt experience arises per se.

    ReplyDelete
  6. To what extent is associative learning related to consciousness? Or, in other words, would associative learning be possible without any kind of phenomenal consciousness?

    ReplyDelete
  7. Associative learning (AL), and even unlimited (i.e. very flexible) associative learning (UAL), does not entail experiencing. Clearly we can make robots who manifest UAL, and they are not experiencing beings. Nor do genetic algorithms, which are implemented in some computer programs, make these programs living entities. Such programs are indeed unlimited heredity systems, but only in a biological system, in the context of the actual evolutionary history of life, are they (according to Maynard Smith and Szathmáry) a criterion for living. Similarly, UAL entails experiencing in the context of the evolution of animals, animals that are endowed with various precondition characteristics (such as multicellular organization, a nervous system, possibly cephalization, adaptive plasticity at several levels including basic learning mechanisms, etc.). Only in the context of such organization, in which we argue UAL actually evolved, does it make sense to argue that UAL entails experiencing; in other words, for UAL to evolve in animals, a lot of complex biological features have to be presupposed. The same is true for unlimited heredity: in a biological system, unlimited heredity can only evolve in a system which has certain autopoietic properties. The fact that we can technologically implement some process (such as unlimited heredity or UAL) which is formally analogous to a biological process in artificial systems can be (sometimes) quite illuminating, but it is certainly not telling us how the analogous process occurred in biological evolution. Searle is making the same point in his talk.
    We realize that the dynamical description of experiencing we can offer is at present rather limited, and it is our job to flesh it out. However, we think that the difference between experiencing/feeling and neural-embodied dynamics is not a category difference that cannot be bridged, just as living is not categorically different from certain organizational autopoietic (e.g. chemoton) dynamics. We think that once the dynamical organization that entails experiencing is clarified, we shall understand experiencing in these terms. What we suggest is not such a model, but we believe that UAL is a good tool with which we can try to make progress in constructing such a model, because we think that it is a good candidate for a complexity threshold. Note that a complexity threshold is a criterion, not a model. Moreover, there is a big gray area, and it may well be the case that experiencing in a very limited sense appeared earlier (for example, already in cnidarians), but the early versions were limited and could not evolve any further without additional evolutionary innovations involving new memory processes, which led to UAL and to the global yet specific categorizing sensory states we call CSSs, which enabled animals to discriminate, predict, and be motivated to act. We shall be able to say more when we have better embodied dynamic models of experiencing.

    ReplyDelete
  8. THE LOGIC OF ENTAILMENT

    "UAL entails experiencing [only] in the context of the evolution of animals… endowed with various precondition characteristics (such as multicellular organization nervous system, possibly cephalization, adaptive plasticity at several levels including basic learning mechanisms etc.)"

    Why?

    ReplyDelete
  9. It is a shame Dr. Jablonka's talk didn't come sooner in the summer school. I found that having her ideas in mind as a framework while going through my notes and watching some of the talks again really put things back in place.
    I do have a question regarding the possible consciousness of "not alive" entities that show associative learning. How can we be sure that biological beings are the only ones that can feel? Maybe I am getting something wrong or not understanding though...

    ReplyDelete
  10. I really like the idea of comparing the search for the defining criteria of consciousness with the historical search for the defining criteria of life. Perhaps, when we define all the 'doing', 'feeling' will be explained, the same way there is no vital force. It is hard for me to think that consciousness would be more than one or multiple neural events. But I suppose even if we do fully define it, we will still not have explained the why or how, just the what.
    When thinking back on this talk, I also wondered along the same lines as Laurence: What would it take to attribute consciousness to a non-biological being? Is life/being alive a criterion of consciousness? Could a robot achieve unlimited associative learning? Is there any reason to believe it couldn't?

    Izabo Deschênes

    ReplyDelete
    Replies
    1. What would it take to attribute consciousness to a non-biological being?

      T3

      Is life/being alive a criterion of consciousness?

      No, but so far it's an invariant correlate

      Could a robot achieve unlimited associative learning?

      Yes.

      Is there any reason to believe it couldn’t?

      No.

      Delete
  11. I like this approach very much, but consciousness is the most heavily loaded word of all those mentioned before. I think we just don't need it; James's "specious present" has been effectively deconstructed. As long as we have memory and trace, or the "remembered present" as in Edelman's work, we don't need to invoke such a metaphysically loaded word. There is no hard problem at all. Memory constitutes our sensation of "the present" and makes us believe we do have a thing, or "experience", called consciousness.

    ReplyDelete