Sunday 1 July 2012

Inman Harvey - No Hard Feelings: Why Would An Evolved Robot Care?


    Abstract: When studying cognition and consciousness, there are three possible strategies: one can introspect in an armchair, one can observe natural cognition in the wild, or one can synthesise artificial cognition in the lab. Some strands of Artificial Life pursue the third strategy, and Evolutionary Robotics opens up a particular new approach. Whereas most attempts at building AI systems rely heavily on designs produced through introspection -- and therefore reflect the current fads and intellectual biases of the moment -- the evolutionary approach can start from the assumption that we humans are likely to be hopeless at designing cognitive systems anything like ourselves. After all, one would not expect a nematode worm with just 300 neurons to have much insight into its own cognitive apparatus.
    The evolutionary method does not need the designer, the Watchmaker with insight. But it does need clear operational tests for what will count as cognition -- goal-seeking, learning, memory, awareness (in various objective senses of that word), communicating. We can evolve systems with many such cognitive abilities; so far to a rather limited extent with proofs of concept, but with no reason to expect any barriers in principle to achieving any behaviours that can be operationally and objectively defined. Of course, there are no operational tests to distinguish a so-called zombie from its human counterpart that has feelings, so this seems to leave unresolved the question of whether an evolved robot could indeed have subjective feelings.
    Harnad (2011) laid out one version of this issue in a paper entitled "Doing, Feeling, Meaning and Explaining", suggesting that the Doing (which can be verified operationally) is the Easy part, whereas the Feeling, and probably by extension the Meaning, are the ineffable and Hard parts. In contrast, I shall be focussing on the Explaining, and pointing out that different kinds of explanations are needed for different jobs. In particular, the concept of awareness, or consciousness, has a whole range of different meanings that need different kinds of explanation. Many of these meanings can indeed be operationally and objectively defined, and hence we should be able to build or evolve robots with these properties. But one crucial sense is subjective rather than objective, and cannot be treated in similar fashion. This is a linguistic issue to be dissolved rather than a technical problem to be solved.
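
    To make concrete the abstract's claim that the evolutionary method needs no designing Watchmaker, only an operational test of success, here is a minimal sketch of the evolutionary-robotics loop. The task (phototaxis), the toy controller, and every parameter below are invented for illustration and are not drawn from Harvey's own experiments.

# A minimal sketch of the evolutionary-robotics loop: the "designer" is
# replaced by selection against an operational test (the fitness function).
# The task (phototaxis), the controller, and all parameters are illustrative.

import math
import random

GENOME_LEN = 6          # weights/biases of a 2-sensor, 2-motor controller
POP_SIZE = 30
GENERATIONS = 100
MUTATION_STD = 0.2


def random_genome():
    return [random.gauss(0.0, 1.0) for _ in range(GENOME_LEN)]


def controller(genome, sensors):
    """Map two light sensors onto two wheel speeds, using the genome as weights."""
    s_left, s_right = sensors
    w = genome
    left_motor = math.tanh(w[0] * s_left + w[1] * s_right + w[2])
    right_motor = math.tanh(w[3] * s_left + w[4] * s_right + w[5])
    return left_motor, right_motor


def fitness(genome):
    """Operational test: how close does the agent end up to a light at (0, 0)?"""
    x, y, heading = 5.0, 5.0, random.uniform(0.0, 2.0 * math.pi)
    for _ in range(200):                       # simulation steps
        angle_to_light = math.atan2(-y, -x) - heading
        # Two crude sensors: more light on the side facing the source.
        s_left = max(0.0, math.cos(angle_to_light - 0.5))
        s_right = max(0.0, math.cos(angle_to_light + 0.5))
        lm, rm = controller(genome, (s_left, s_right))
        heading += 0.2 * (rm - lm)             # differential steering
        speed = 0.1 * (lm + rm)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return -math.hypot(x, y)                   # closer to the light = fitter


def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 2]       # truncation selection
        offspring = [
            [g + random.gauss(0.0, MUTATION_STD) for g in random.choice(parents)]
            for _ in range(POP_SIZE - len(parents))
        ]
        population = parents + offspring
    best = max(population, key=fitness)
    print("best final distance to light:", -fitness(best))


if __name__ == "__main__":
    evolve()

    The point of the sketch is only that the design step is replaced by selection against an operational fitness test; nothing in it says whether the evolved controller feels anything, which is exactly the gap the abstract goes on to discuss.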

    Harvey, I. (2002). Evolving Robot Consciousness: The Easy Problems and the Rest. In J. H. Fetzer (ed.), Evolving Consciousness, Advances in Consciousness Research Series, John Benjamins, Amsterdam, pp. 205-219.
    Harvey, I. (2000). Robotics: Philosophy of Mind Using a Screwdriver. In T. Gomi (ed.), Evolutionary Robotics: From Intelligent Robots to Artificial Life, Vol. III, AAI Books, Ontario, Canada, pp. 207-230. ISBN 0-9698872-3-X.
    Harvey, I., Di Paolo, E., Wood, R., Quinn, M., and Tuci, E. A. (2005). Evolutionary Robotics: A New Scientific Tool for Studying Cognition. Artificial Life, 11(1-2), pp. 79-98.
    Harvey, I. (2005). Evolution and the Origins of the Rational. Paper presented at Cognition, Evolution and Rationality: Cognitive Science for the 21st Century, Oporto, September 2002. In: Zilhão, António (ed.), Cognition, Evolution, and Rationality. Routledge Studies in the Philosophy of Science. London, Routledge. ISBN 0415362601.
http://www.sussex.ac.uk/Users/inmanh/Evolution%20and%20the%20Origins%20of%20the%20Rational.doc


Comments invited

15 comments:

  1. I liked Harvey's idea of zillions of types of consciousness, up to the point where it includes a robot sensing a stimulus as consciousness. I don't think that sensing a stimulus is a gradation of consciousness; can we agree that consciousness is an additional process?

    Replies
    1. I guess sensing a stimulus is just one piece of the pie - you've got to have computation and output (whether that output is internal, in the form of a 'thought', or external, in the form of overt behaviour), and a sense of agency. I do agree that this is sufficient.
      Harvey failed to define what it is that unites these forms of consciousness -- what is their shared characteristic that allows us to class them all as shades of consciousness?

    2. Harvey seems to define Consciousness_1...Consciousness_n as objective, observable types of access and agency, but Consciousness_* as inherently subjective and so not truly existing (or at least not observable) out in the world.

      Our species, our present bodily condition, and the contexts in which we developed and in which we now reside all affect the state of our minds. It follows that differences in consciousness do occur (e.g. you sober vs. drunk). There is no absolute standard or threshold, only observable effects of mental function.

    3. I do not share the enthusiasm about the idea of zillions of types of consciousness. I prefer the idea of levels of consciousness, or the idea of steps, as found in Theory of Mind (ToM) for example. If Harnad is right and consciousness is all about feeling, I prefer the idea of a continuum of ways to feel, developed over our phylogenetic history.

  2. Harvey’s concept of a zillion types of consciousness seems a bit broad to me. Does almost everything have a type of consciousness, then? Is all it requires sensing/detecting a stimulus and reacting to it? (As Martha commented above, this doesn’t seem like a gradation of consciousness, at least not the detecting/sensing part.) It would have been nice to have more examples and defining characteristics. (I still need to finish reading some of the papers he recommended -- so many to keep up with! -- maybe I will find more examples there.)
    I also agree with ATufford that a sense of what unites all these zillion forms of consciousness -- a shared characteristic -- was missing and would be interesting to consider.
    As to his claim that robots have a type of consciousness: did any of the other speakers agree with this view? Do people agree that his 3-tailed horse is more or less equivalent to Graziano’s squirrel (as Dr. Harnad has mentioned)?
    I guess I was under the impression that either it feels like something to be something or it doesn’t. How does this fit into Harvey’s zillion types of feeling/consciousness?

    Izabo Deschênes, McGill

    Replies
    1. I do agree that, without a specific set of criteria to define consciousness (unless I missed something...), Harvey's attributions of consciousness seem too broad (e.g. a smoke alarm being conscious when it *senses* smoke). As we move towards a better understanding of consciousness, the concept must remain scientifically useful. Does any organism, for Dr. Harvey, NOT have consciousness? If so, the answer will help better define what is meant by consciousness in Dr. Harvey's account. If not, then the concept loses its semantic usefulness, and we would do better to describe what he is labeling as 'conscious' with other, more agreed-upon terms.

    2. For the conscious_n type of awareness, I use this to refer to *all* the zillion types of awareness one can conceivably think of, and invent operational tests for. So yes, all organisms have various kinds of consciousness_n; and even simple mechanisms have simple kinds of consciousness_n. So a smoke-alarm is conscious_00378 of smoke, but not conscious_00679 of time; whereas an alarm clock is conscious_00679 of time, but not conscious_00378 of smoke. These are objectively definable properties, and operationally testable -- you can take them back to the shop and get a refund if they don't work. Similarly all the conscious_n properties of humans (and indeed animals and flowers and bacteria) are, as I define the term conscious_n, operationally testable.
      BUT there is another sense of consciousness, namely what I define as consciousness_* (to make clear it is different from any or all of the conscious_n), that goes beyond these objectively testable kinds; the touchy-feely kind, the subjective 'I am enjoying the red sunset' kind. And this is subjective, not objective, and cannot be operationally tested for -- and hence it is absolutely pointless, indeed meaningless, to try and 'build this into a robot'. Similarly it is meaningless to discuss whether a horse or a dog or a robot 'has' this consciousness_*.

      Inman Harvey

    3. Thank you for your response, Dr. Harvey, much appreciated. How, I am curious, do you respond to Searle's distinction between ontological and epistemic subjectivity? Consciousness_* is ontologically subjective, I think you will agree. But if we do away with dualism, then questions about consciousness_* can be epistemically objective, and thus amenable to scientific investigation.

  3. "a smoke-alarm is conscious_00378 of smoke"

    A smoke-alarm responds to smoke, just as a billiard-ball responds to a clunk from another billiard-ball. If every dynamic Newtonian interaction in the universe is conscious, the universe is a lot more animate than we had thought (we had thought it of only some biological organisms, on one small planet)!

    No, this is just a symptom of loose use of ambiguous weasel-words. Only (some) organisms feel (sometimes), and smoke-detectors do not. And feeling is what the hard problem is about. (How? Why?)

    And whereas there are no doubt countless different things that organisms feel and can feel, qualitatively and quantitatively, the fact that they feel is one fact, and an all-or-none one, not a matter of degree.

  4. (Originally posted on FB.) Great lecture! One detail: I was not entirely convinced by Dr. Harvey’s claim that scientific and philosophical interest in phenomenal consciousness arises from a linguistic confusion. There is a pretty clear explanation as to why the conclusion that there is a horse with three tails is fallacious. If we translate the fallacious argument into first-order predicate logic, we will notice that there is a confusion regarding the scope of the quantifiers involved. In first-order predicate logic, the conclusion that there is a horse with three tails simply does not follow. (This is in line with Quine’s famous argument in ‘On What There Is’ that in order to determine whether something exists, we need to rephrase our scientific theories in first-order predicate logic. If our theories, so rephrased, quantify over the purported object, then it exists.) But it does not seem to me that we can use a similar logical trick to explain phenomenal consciousness away.
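
    To spell out the quantifier-scope point (a sketch only; the predicate and constant names are mine): the premise "no horse has two tails" is a negated existential,

    \neg \exists x \,\big(\mathrm{Horse}(x) \wedge \mathrm{Tails}(x) = 2\big),

    and "a horse has one more tail than no horse" just says that any horse has more tails than zero,

    \forall x \,\big(\mathrm{Horse}(x) \rightarrow \mathrm{Tails}(x) > 0\big).

    The joke instead treats "no horse" as a denoting term n, reading the premises as \mathrm{Tails}(n) = 2 and, for a horse a, \mathrm{Tails}(a) = \mathrm{Tails}(n) + 1 = 3. Since n is not a term of the language, that reading is ill-formed, and from the correct readings the conclusion \exists x \,\big(\mathrm{Horse}(x) \wedge \mathrm{Tails}(x) = 3\big) does not follow.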

  5. Dr. Harnad defines consciousness as feeling. If consciousness is all about the feeling, then it’s not really defined by the doing, by actions. Therefore, the Turing test is extremely limited, because it assumes a definition of consciousness based on action and not on feeling. For example, locked-in syndrome patients are thought to be conscious, but they don’t show any actions. The Turing test would fail to categorize that person as conscious. Likewise, assume hypothetically that I could indeed create a robot that has all the networks necessary to feel, but one that does not interact with the external world, just like the locked-in patient. The Turing test would fail to categorize this robot as conscious. Conversely, I can generate a series of millions and millions of paper instructions of what to say in response to every question possible that can be asked in the Turing test (which is a finite number). Just papers saying IF you hear this question, THEN say this. The Turing test judge may not be able to know that the entity on the other side of the wall is just a bunch of paper instructions, and may categorize that bunch of papers as conscious. A complete test of consciousness should be one in which, by looking at the hardware alone, one would be capable of knowing whether that hardware at work will generate feeling, be it in the presence or in the absence of action.

    Replies
    1. DOINGS: SYMBOLIC, ROBOTIC & NEURAL

      @Diego: "Dr. Harnad defines consciousness as feeling. [But] feeling [is] not really defined by… doing…. Therefore, the Turing test is extremely limited"

      Consciousness is feeling. Doing is doing. Doing is our only way to mind-read (correlations, mirror-neurons). The Turing test can only do as well, not better.

      @Diego: "locked-in syndrome patients [feel but] don’t show any actions. The turing test would fail…"

      The TT is for systems we've designed, to test whether they can do what feelers can do.

      A robot that could only exhibit locked-in syndrome would fail the TT.

      (There are physiological tests for locked-in syndrome; nothing to do with TT; neurologists' mind-reading is a bit more refined than ours. But neural activity is still doing [T4].)

      @Diego: "a robot [with] the networks necessary to feel, but… does not interact … like the locked-in patient… would fail… the Turing test"

      Correct. And the only one who could ever know whether it feels is the robot.

      But the TT is to test whether a robot can do anything a normal feeler can do: not just what a locked-in patient can do (which is next to nothing).

      The TT is not a test of feeling: it's the best we can do to test for doing capacity indistinguishable from our own. That it feels is just a hope -- since it's the best we can do.

      @Diego: "millions and millions of paper instructions of what to say in response to every question possible that can be asked in the turing test (which is a finite number). Just papers saying IF you hear this question, THEN say this."

      Sounds like you've tumbled back into the Chinese room, and T2 to boot. Check Searle's argument and the symbol grounding problem to get yourself up to speed on this. Until further notice, computationalism would fail even if it could pass T2. But T3 (robotic capacity) is needed to ground T2.

      (You're wrong that all possible conversations are finite; but the capacity to engage in all possible conversations rests in a finite capacity, though not just a computational one.)
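
      (Rough arithmetic, with invented numbers, to see why: over a working vocabulary of V = 10^4 words, the number of distinct word-strings of length at most L already satisfies

      \sum_{k=1}^{L} V^{k} \;\geq\; V^{L} = 10^{4L}, \qquad \text{e.g. } 10^{80} \text{ at } L = 20,

      and since conversation length has no upper bound, the set of possible conversations is not finite -- yet a finite grammar plus a finite sensorimotor capacity suffices to engage in any of them.)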

      @Diego: "A complete test of consciousness should be one in which, by looking at the hardware alone, one would be capable of knowing whether that hardware… will generate feeling [with or without] action."

      That's T4 (robotic doing + neural doing), but you need to get there, and you certainly can't get there through neural/feeling correlates alone. (And even T4 does not explain feeling, even if it generates it.)

  6. Xavier Dery ‏@XavierDery

    Harvey says Evolutionary Robotics avoids the problem of understanding how the system works... It just wants to observe it at work!? #TuringC

    10:36 AM - 1 Jul 12 via Twitter for Android

  7. I still don't totally understand Dr. Harvey's argument that the hard problem only exists because of linguistic confusion. He provided one example that relied on there being multiple definitions of the word "no", but I don't understand how this generalizes to the hard problem. His alternative argument, that it's futile to treat the subjective objectively, makes much more sense to me (whether or not I agree with it). Can anyone enlighten me regarding this "linguistic confusion"?

    Replies
    1. LOGIC AND LOGODAEDALY

      @Matthew

      I couldn't possibly enlighten you because I think the confusion here is entirely Inman Harvey's! And his 3-tailed horse (just like Michael Graziano's squirrel) is an attempt to dispel the confusion with word-play.
