Comments on Turing Consciousness 2012: Inman Harvey - No Hard Feelings: Why Would An Evolved Robot Care?

<b>LOGIC AND LOGODAEDALY</b>

@Matthew

I couldn't possibly enlighten you, because I think the confusion here is entirely Inman Harvey's! And his 3-tailed horse (just like Michael Graziano's squirrel) is an attempt to dispel the confusion with word-play.

-- Instructor, 2012-08-10

I still don't totally understand Dr. Harvey's argument that the hard problem only exists because of linguistic confusion. He provided one example that relied on there being multiple definitions of the word "no", but I don't understand how this generalizes to the hard problem. His alternative argument, that it is futile to treat the subjective objectively, makes much more sense to me (whether or not I agree with it). Can anyone enlighten me regarding this "linguistic confusion"?

-- Anonymous, 2012-08-09
<b>DOINGS: SYMBOLIC, ROBOTIC & NEURAL</b>

<i><b>@Diego:</b> "Dr. Harnad defines consciousness as feeling. [But] feeling [is] not really defined by… doing…. Therefore, the Turing test is extremely limited"</i>

Consciousness is feeling. Doing is doing. Doing is our only way to mind-read (correlations, mirror-neurons). The Turing test can only do as well, not better.

<i><b>@Diego:</b> "locked-in syndrome patients [feel but] don't show any actions. The Turing test would fail…"</i>

The TT is for systems we've designed, to test whether they can do what feelers can do.

A robot that could only exhibit locked-in syndrome would fail the TT.

(There are physiological tests for locked-in syndrome; they have nothing to do with the TT; neurologists' mind-reading is a bit more refined than ours. But neural activity is still doing [T4].)

<i><b>@Diego:</b> "a robot [with] the networks necessary to feel, but… does not interact… like the locked-in patient… would fail… the Turing test"</i>

Correct. And the only one who could ever know whether it feels is the robot.

But the TT is to test whether a robot can do anything a normal feeler can do: not just what a locked-in patient can do (which is next to nothing).

The TT is not a test of feeling: it's the best we can do to test for doing capacity indistinguishable from our own. That it feels is just a hope -- since it's the best we can do.

<i><b>@Diego:</b> "millions and millions of paper instructions of what to say in response to every question possible that can be asked in the Turing test (which is a finite number). Just papers saying IF you hear this question, THEN say this."</i>

Sounds like you've tumbled back into the Chinese room, and T2 to boot. Check Searle's argument and the symbol grounding problem to get yourself up to speed on this. Until further notice, computationalism would fail even if it could pass T2. But T3 (robotic capacity) is needed to ground T2.

(You're wrong that the set of all possible conversations is finite, but the capacity to engage in all possible conversations does rest on a finite capacity -- though not just a computational one.)

<i><b>@Diego:</b> "A complete test of consciousness should be one in which, by looking at the hardware alone, one would be capable of knowing whether that hardware… will generate feeling [with or without] action."</i>

That's T4 (robotic doing + neural doing), but you need to get there, and you certainly can't get there through neural/feeling correlates alone. (And even T4 does not explain feeling, even if it generates it.)

-- Instructor, 2012-08-06
Xavier Dery @XavierDery

Harvey says Evolutionary Robotics avoids the problem of understanding how the system works... It just wants to observe it at work!? #TuringC

10:36 AM - 1 Jul 12 via Twitter for Android

-- Xavier Déry, 2012-07-31

Dr. Harnad defines consciousness as feeling. If consciousness is all about the feeling, then it's not really defined by the doing, by actions. Therefore, the Turing test is extremely limited, because it assumes a definition of consciousness based on action and not on feeling. For example, locked-in syndrome patients are thought to be conscious, but they don't show any actions. The Turing test would fail to categorize those persons as conscious. Likewise, assume hypothetically that I could indeed create a robot that has all the networks necessary to feel, but one that does not interact with the external world, just like the locked-in patient. The Turing test would fail to categorize this robot as conscious. Conversely, I can generate a series of millions and millions of paper instructions of what to say in response to every question possible that can be asked in the Turing test (which is a finite number). Just papers saying IF you hear this question, THEN say this. The Turing test judge may not be able to know that the entity on the other side of the wall is just a bunch of paper instructions, and may categorize that bunch of papers as conscious.
A complete test of consciousness should be one in which, by looking at the hardware alone, one would be capable of knowing whether that hardware at work will generate feeling, be it in the presence or in the absence of action.

-- Anonymous, 2012-07-27

(Originally posted on FB.) Great lecture! One detail: I was not entirely convinced by Dr. Harvey's claim that scientific and philosophical interest in phenomenal consciousness arises from a linguistic confusion. There is a pretty clear explanation as to why the conclusion that there is a horse with three tails is fallacious. If we translate the fallacious argument into first-order predicate logic, we will notice that there is a confusion regarding the scope of the quantifiers involved: "no horse has two tails" is rendered as ¬∃x(Horse(x) ∧ HasTwoTails(x)), so "no horse" is not a singular term denoting some entity with tails to which one more tail could be added. In first-order predicate logic, the conclusion that there is a horse with three tails simply does not follow. (This is in line with Quine's famous argument in 'On What There Is' that in order to determine whether something exists, we need to rephrase our scientific theories in first-order predicate logic. If our theories, so rephrased, quantify over the purported object, then it exists.) But it does not seem to me that we can use a similar logical trick to explain phenomenal consciousness away.

-- Anonymous, 2012-07-14

<b><i>"a smoke-alarm is conscious_00378 of smoke"</i></b>

A smoke-alarm responds to smoke, just as a billiard-ball responds to a clunk from another billiard-ball.
If every dynamic Newtonian interaction in the universe is conscious, the universe is a lot more animate than we had thought (about some biological organisms, on one small planet)!

No, this is just a symptom of loose use of ambiguous weasel-words. Only (some) organisms feel (sometimes), and smoke-detectors do not. And feeling is what the hard problem is about. (How? Why?)

And whereas there are no doubt countless different things that organisms feel and can feel, qualitatively and quantitatively, the fact that they feel is one fact, and an all-or-none one, not a matter of degree.

-- Instructor, 2012-07-13

I do not share the enthusiasm about the idea of zillions of types of consciousness. I prefer the idea of levels of consciousness, or the idea of steps, found in the ToM for example. If Harnad is right and consciousness is all about feeling, I prefer the idea of a continuum of ways to feel, developed during our phylogenetic history.

-- Roxane Campeau, 2012-07-12

Thank you for your response Dr. Harvey, much appreciated. How, I am curious, do you respond to Searle's distinction between ontological and epistemic subjectivity? Consciousness_* is ontologically subjective. I think you will agree.
But if we do away with dualism, then questions about consciousness_* can be epistemically objective, and thus amenable to scientific investigation.

-- Nico Sheppard-Jones, 2012-07-11

For the conscious_n type of awareness: I use this to refer to *all* the zillion types of awareness one can conceivably think of, and invent operational tests for. So yes, all organisms have various kinds of consciousness_n; and even simple mechanisms have simple kinds of consciousness_n. So a smoke-alarm is conscious_00378 of smoke, but not conscious_00679 of time; whereas an alarm clock is conscious_00679 of time, but not conscious_00378 of smoke. These are objectively definable properties, and operationally testable -- you can take them back to the shop and get a refund if they don't work. Similarly, all the conscious_n properties of humans (and indeed animals and flowers and bacteria) are, as I define the term conscious_n, operationally testable.

BUT there is another sense of consciousness, namely what I define as consciousness_* (to make clear that it is different from any or all of the conscious_n), that goes beyond these objectively testable kinds; the touchy-feely kind, the subjective 'I am enjoying the red sunset' kind. And this is subjective, not objective, and cannot be operationally tested for -- and hence it is absolutely pointless, indeed meaningless, to try to 'build this into a robot'.
Similarly, it is meaningless to discuss whether a horse or a dog or a robot 'has' this consciousness_*.

-- Inman Harvey, 2012-07-06

I do agree that, without providing a specific set of criteria to define consciousness (unless I missed something...), Harvey's attributions of consciousness seem too broad (e.g. a smoke alarm being conscious when it *senses* smoke). As we move towards a better understanding of consciousness, the concept of consciousness must be scientifically useful. Does any organism, for Dr. Harvey, NOT have consciousness? If so, the answer will help better define what is meant by consciousness in Dr. Harvey's account. If not, then the concept loses its semantic usefulness, and we had better describe what he is labeling as 'conscious' with other, more agreed-upon terms.

-- Nico Sheppard-Jones, 2012-07-05

Harvey's concept of a zillion types of consciousness seems a bit broad to me. Does almost everything have a type of consciousness, then? Is all it requires sensing/detecting a stimulus and reacting to it? (As Martha commented above, this doesn't seem like a gradation of consciousness, at least the detecting/sensing part.) It would have been nice to have more examples and defining characteristics. (I still need to finish reading some of the papers he recommended (so many to keep up with!); maybe I will find more examples there!)
I also agree with ATufford that a sense of what unites all these zillion forms of consciousness -- a shared characteristic -- was missing and would be interesting to consider.

As to his claiming that robots have a type of consciousness: did any of the other speakers agree with this view? Do people agree that his 3-tailed horse is more or less equivalent to Graziano's squirrel (as Dr. Harnad has mentioned)?

I guess I was under the impression that it either feels like something to be something or it doesn't. How does this fit into Harvey's zillion types of feeling/consciousness?

-- Izabo Deschênes (McGill), 2012-07-04

Harvey seems to define Consciousness_1...Consciousness_n as objective, observable types of access and agency, but Consciousness_* as inherently subjective and so not truly existing (or at least not observable) out in the world.

Our species, our present bodily condition, and the contexts in which we developed and in which we now reside all affect the state of our minds. It follows that differences in consciousness do occur (e.g. you sober vs. drunk). There is no absolute standard or threshold, only observable effects of mental function.

-- Anonymous, 2012-07-03

I guess sensing a stimulus is just one piece of the pie: you've got to have computation and output (whether that output is internal in the form of a 'thought' or external in the form of overt behaviour), and a sense of agency. I do agree that this is sufficient.
Harvey failed to define what it is that unites these forms of consciousness: what is their shared characteristic that allows us to count them all as shades of consciousness?

-- ATufford, 2012-07-03

I liked Harvey's idea of zillions of types of consciousness, up to where it includes a robot sensing a stimulus as consciousness. I don't think that sensing a stimulus is a gradation of consciousness; can we agree that consciousness is an additional process?

-- Martha Shiell, 2012-07-03