Sunday 1 July 2012

Stevan Harnad: How/Why Explaining the Causal Role of Consciousness is Hard

Abstract: There are two things that cognitive science needs to explain: (1) How and why organisms can do all the things they can do and (2) how and why organisms feel. Explaining doing -- Turing's problem -- has been dubbed the "easy" problem (though it's no easier than other problems in biological science, and we're nowhere near solving it). Explaining feeling has been dubbed the "hard" problem. The reason it is hard is that feeling keeps on turning out to be superfluous in any causal explanation of doing.

Harnad, Stevan (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1:164-167.
Harnad, S. (2000) Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. Journal of Consciousness Studies 7(4): 54-61.
Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, R., Roberts, G. & Beber, G. (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.   
Harnad, S. (2011) Doing, Feeling, Meaning And Explaining. In: On the Human.
Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59:433-460.

Comments invited


  1. I agree that T3 is the right level. I wonder if you give Turing too much credit for having thought of that, however. Embodiment seems to have been quite outside his concern. Verbal doing yes, but not behavior in the world. You cast it as an accident of him not wanting the obvious appearance of robots to throw things off. But is there any evidence he thought that imitation would be usefully extended beyond verbal competence?

    1. You are right to ask. I have no evidence. It's just because it's such a trivial point that I have difficulty imagining that it was not obvious to him.

    2. I just checked the 'Computing Machinery' paper and he does at least countenance outfitting machines with sense organs (last page of the paper):

      We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

      I think, however, that the idea that embodiment, and our specific embodiment, is important is a relatively recent one, and one he probably would have had difficulty agreeing with.

    3. Yes, I remember the part about adding sensor/effectors, but once that's allowed, other dynamic components are possible too. Turing *may* not have anticipated embodiment -- or maybe he did, judging from his work on growth and morphology...

  2. Dr. Harnad defines consciousness as feeling. If consciousness is all about feeling, then it's not really defined by doing, by actions. Therefore the Turing test is extremely limited, because it assumes a definition of consciousness based on action and not on feeling. For example, locked-in syndrome patients are thought to be conscious, but they don't show any actions. The Turing test would fail to categorize such persons as conscious. Likewise, assume hypothetically that I could indeed create a robot that has all the networks necessary to feel, but one that does not interact with the external world, just like the locked-in patient. The Turing test would fail to categorize this robot as conscious. Conversely, I could generate millions and millions of paper instructions specifying what to say in response to every question that could possibly be asked in the Turing test (a finite number). Just papers saying IF you hear this question, THEN say this. The Turing test judge might not be able to tell that the entity on the other side of the wall is just a bunch of paper instructions, and might categorize that bunch of papers as conscious. A complete test of consciousness would be one in which, by looking at the hardware alone, one could tell whether that hardware at work will generate feeling, be it in the presence or the absence of action.
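The "bunch of paper instructions" the commenter describes can be sketched as a pure lookup table (a toy illustration only; the rules and names here are invented, and a real Turing-test table would be astronomically large):

```python
# Toy sketch of the commenter's "IF you hear this question, THEN say this"
# paper machine: a lookup table with no understanding and no feeling.
# The entries are invented for illustration.
RULES = {
    "are you conscious?": "Of course I am. Aren't you?",
    "what is your name?": "I'd rather not say.",
    "do you feel pain?": "Only when I stub my toe.",
}

def paper_machine(question: str) -> str:
    """Answer by table lookup alone; fall back to a stock evasion."""
    return RULES.get(question.strip().lower(), "Could you rephrase that?")

print(paper_machine("Are you conscious?"))
```

Whether a judge could be fooled by such a table is exactly the commenter's point: nothing in the lookup depends on anything being felt.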

    1. The Turing Test is primarily about creating a model that can do *everything* a healthy normal human being can do, not just what a locked-in syndrome patient can do, which would teach us nothing.

      The T3 is not a test of consciousness. It is a test of performance capacity (doing). Turing's argument is that if you can design a robot that can pass it, such that you cannot tell it apart from a real person, then you have no better (or worse) reason to believe or disbelieve that the T3 feels than you have with a real person.

      This is called the other-minds problem.

    2. I agree with your point that the T3 is not designed to be a test of consciousness; to say so is a misunderstanding of the T3 itself. In light of this, though, one can rightly question the utility of the T3 at all, aside from a purely hypothetical exercise. Wouldn't we be much better off trying to define tractable properties and processes of consciousness (as Searle, Dennett, etc. have attempted), which could then be applied to the situations above in place of the T3? Why must we stress the importance of the T3 at all, if it moves us no closer to answering the questions that remain unanswered?

    3. @Roberto

      Well, if you have no reason to doubt that a T3-passing machine does indeed feel,
      and if feeling = consciousness (according to Harnad),
      then T3 could be considered as a phenomenal consciousness test.

    4. @Roberto

      What "tractable properties and processes of consciousness"?

    5. @Seb
      You're right to say that if a machine passes a T3, we have no reason to deny that that machine is conscious. This does not get us any closer to explaining why we are conscious, or even how we are conscious.

      @Dr. Harnad
      Requisites and attributes of consciousness that can be objectively assessed, rather than relying on a subjective inference of the existence of consciousness. Unfortunately, the community of scientists who study consciousness does not seem to be anywhere near such a definition of consciousness, which only adds confusion to the field; so often (even at this conference) scientists talk past each other when trying to explain consciousness, because their subjective notions of consciousness are not congruent.



      Each of us knows exactly what it feels like to feel. No need for definition there. What's missing is a causal explanation of how and why we feel.

      Yes, there are an awful lot of synonyms and weasel-words for consciousness, creating a lot of confusion -- and the false sense of making inroads on explaining consciousness. (That's why I suggest we give them all a rest and call a spade a spade: feeling.)

      Objective, observable correlates of organisms' doing capacity as well as of feeling are all we ever get. But, in principle, all of doing capacity looks as if it will prove explainable in the usual way. Not so for feeling.

      And the "hard" problem is not the other-minds problem (which is that we cannot observe or measure the feelings of others, just their correlated doings, bodily and neural), although the other-minds problem certainly doesn't make the hard problem any easier!

      Remember that even if God told us that a T3 (or T4) robot really feels, that still would not solve the hard problem of how and why it feels -- just the easy problem of how and why it can do what it can do.

  3. Yes, the T3 is for a generic, normal, healthy, behaving human. A T3 for a paralyzed, comatose person could be "passed" trivially, and would tell us nothing.

  4. I think Dr. Harnad mentioned that if a particular feature appeared during evolution, and has remained, then it must have an adaptive function. This makes sense if emerging features are either adaptive or maladaptive. But what if a certain trait emerges and is neither adaptive nor maladaptive: a fluke? This trait could be passed on from generation to generation even though it has no adaptive function. Maybe a trait doesn't have to be adaptive to remain; perhaps it just needs not to be maladaptive. The phrase "survival of the good enough" rather than "survival of the fittest" comes to mind.

    1. Interesting point. Indeed, not every feature of an organism is due to adaptation. This was Gould and Lewontin's argument in their famous 1979 paper "The Spandrels of San Marco and the Panglossian Paradigm": not every feature of an organism is the result of adaptation to selective pressures coming from the environment. An obvious example is the human chin. We did not evolve chins adaptively (because, for instance, chins would brace the impact of a fall, protecting our teeth from being shattered, and so forth).
      Human chins are thought to have evolved because the physiology of our skulls, in particular our facial configuration, changed as a result of our adaptations to the environment. In developing tools and using fire, a lesser workload was imposed upon our facial musculature and teeth; these reconfigured, and it is this adaptive reconfiguration that yielded human chins. The chin did not evolve as a result of selective pressures per se; searching for an adaptationist explanation here would be searching for a "Just-So" story.
      Perhaps, in searching for an adaptationist explanation of consciousness, we too are searching for a "just-so" story.

    2. Thanks for the example! I am sure we can find many more. The subject came up again yesterday and I couldn't help but think that saying that something MUST have an adaptive function is similar to saying that something/someone MUST have created us. I don't believe that there is a function/purpose to everything. Sometimes things just happen.


    3. Gualtiero Piccinini (University of Missouri-St. Louis), "Is Consciousness a Spandrel?", will be about this possibility.

      I don't find it surprising that butterflies see a lot more color than they need to, because the extra wave-lengths are some sort of fluke or fellow-traveller.

      But I would find it very surprising if the most fundamental feature of existence -- consciousness (feeling) -- were just something like that...

    4. Louisa, I agree with you that we should not always assume that our characteristics have adaptive functions - this is one of the inherent dangers of approaching human beings from the standpoint of evolutionary psychology! It is certainly possible that conscious awareness is a byproduct of the way in which other (adaptive) functions are processed in the brain, and I look forward to Sunday's talk on the subject. Nevertheless, I also agree with Stevan on this point: it's hard to imagine that such a complex and all-pervasive characteristic, one that colours every aspect of our lives, would have no purpose. And from a very basic point of view, we all have experiences every day that hint at the adaptive usefulness of feeling. I have sex because it feels good, I eat because I feel bad when I don't, etc. This makes me think, actually - perhaps the primary function of consciousness is to motivate appropriate approach and avoidance behaviour - to guide the animal towards what it needs and away from what will harm it. For without the associated feeling, where would the motivation come from? Why would I bother?!

    5. That's an interesting point, Sarah. If we follow the point you make at the end of your comment, what's to say that animals and plants and all other beings in this world don't have feelings? They certainly follow approach and avoidance behaviours, and if this qualifies them as conscious, how can we justify killing and eating them?

    6. @Noemi: I must disagree with your last proposition: how can we justify killing and eating organisms that feel?
      Whether an organism feels or not seems to me a very bad criterion for making value judgments about which organisms we should eat or kill. First, because there is (so far) no way to find out whether an organism other than yourself has feelings, and second, because the very fact that we have evolved brains that allow us to have this conference partly relies on the fact that we have evolved a nutrition that is incompatible with this proposition.


    7. @Sebastien Tremblay

      "Whether an organism feels or not seems to me as a very bad criterion for making value judgment on what organism we should eat or kill."

      Well, yes, a serial killer with a time-bomb wrapped around him making his way toward a crowd does feel, but that's a very bad criterion for deciding not to kill him...

      But with a feeling sparrow and someone who feels like eating moineau sans tête, sauce chasseur (headless sparrow in hunter's sauce), it's not so evident.

      If it's ok because "there is no way (so far) to find out if an organism other than yourself has feelings" then by the same token you can eat your neighbour.

      And if it's ok because "we have evolved a nutrition that is incompatible with this proposition" then that's simply incorrect:

      We have evolved a nutrition in which we can survive and be completely healthy without eating feeling organisms (and thereby, as a bonus, also able to feed a lot more human beings).

    8. @Stevan Harnad

      Please refer to my last comment on Fernando Cervero's thread for the issue of value judgments about animals' lives.

      Here I would like to add an important clarification to my proposition.

      You said: “If it's ok because "there is no way (so far) to find out if an organism other than yourself has feelings" then by the same token you can eat your neighbour.”

      But it is certainly not OK to kill an animal on the sole basis that it has no feelings.
      I would not eat my neighbor for a lot of reasons other than that he is a feeler.

      My initial argument is that we should not make value judgments about the lives of animals on the sole basis of a (biased) intuition that some of them might have feelings. As far as I know, our laws and policies are not dependent on the assumption that humans have feelings, and I believe they would hold even if it were one day proven that consciousness is an illusion.


      Yes, of course there could be "laws" in a feelingless world. Robots could have traffic lights, traffic cops and traffic tickets. Maybe they could even "evolve" in a world with bio-zombie organisms instead of feeling ones, if there could be one. Laws (and law enforcement) control doings. They also are doings, just as computations are.

      But who cares? Certainly no one and nothing in a feelingless world. The "laws" would in fact just be "laws of nature," rather like F = ma.

      So the hard problem remains: why do organisms in the real world feel? Survival and reproduction just require that they do whatever needs to be done to survive and reproduce. How and why are we not just feelingless Darwinian survival machines?

      And the only "laws" that matter matter only because organisms feel.

  5. Harnad believes that feelings fulfill a function, and that is why they have survived natural selection. We are supposed to be talking about what this function might be but I'm not sure that we've gotten many possible explanations so far (besides Damasio and homeostasis).


      Actually, I doubt that it is possible to explain how and why feelings evolved, because not only does it look as if feelings are superfluous (biological evolution is about doings), but it also looks as if feelings have no causal role as an independent force in the world (like gravity or electromagnetism): the four fundamental forces (all doings) are the only causal degrees of freedom we have, and it looks as if they can do any biological job on their own...

  6. Floreano demonstrated that prosocial (e.g. altruistic) behaviour evolves in humans and other species as an adaptive function for the survival of groups, often when the members are inter-related (where the effect is stronger). You equate consciousness with feeling, where feeling seems to act as motivation to perpetuate all levels of homeostasis. Why must we assume that feeling is anything other than this: simply a manifestation of pro-survivalist urges and tendencies that increase biological fitness?

    From this perspective, the question of consciousness - as perhaps unanswerable, and surely not empirically quantifiable - should largely disappear. Who are we to judge that, just because we think and feel at a meta-level, we are so different from the animals from which we are so often prone to distance ourselves? When consciousness becomes measurable in its effects, I think so eventually do its causes.

    1. Prosociality and maintaining homeostasis are useful -- in robotic simulations as well as in adaptive servomechanisms.

      But how and why are they felt?

  7. I cannot understand anything if I do not use concepts; and a working concept also indirectly delineates what it does not represent. With the definition of a car, for example, we are able to tell what a car is and what it is not (e.g. "A car is a vehicle, but not a teapot"). With the definition of "feeling" we do not have the same conceptual counterpart: I've never been dead, nor in a coma; I'm not even sure that my sleep is feeling-less! I'm not sure I can properly define what a feeling is and what it is not, because I cannot prevent myself from feeling. So is it possible to argue that consciousness is feeling if one does not know what the concept of feeling refers to?

    1. No one knows quite what having/using concepts (ideas? thoughts?) is (maybe computations, maybe some sort of dynamic process).

      But robots seem to be able to do some of what you can do, without feeling a thing.

      Our concepts/ideas/thoughts are felt, but how? why?

      Seems like one can generate some (eventually maybe most or all) of our capacities without feeling.

      So what causal role does feeling play?

    2. My point is that nobody knows what non-feeling is. Therefore the notion of "feeling" is incomplete. I can experience the difference between a car and a teapot, but I cannot experience the difference between feeling and non-feeling. From a logical point of view, "feeling" is really weird: experience is like the result of an equation we have no access to, and feeling is a constant in the equation itself.

    3. You are right that there is something anomalous about the category "feeling" -- or, more particularly, the category "what it feels like to feel" -- because we are unable to sample its complement ("what it feels like to not-feel").

      Uncomplemented Categories, or, What is it Like to be a Bachelor?

  8. About the hard problem: the why/how of feelings. To answer the "why" question, I have seen this reasoning many times: "Could we do this without feelings? If yes, feelings can't be an adaptive advantage for this behavior or action, and can't explain the 'why'." But what if we asked "Would we do it?" Would we do all these things the way we do them, or would we do them at all, without feelings? And then ask ourselves what the adaptive advantage is of the "would" vs. the "wouldn't".

    1. Without feelings, "we" wouldn't be there. Could what we can do be done without feelings? Perhaps not, but if not, it remains to explain how and why not.

  9. At the beginning of your talk you said that feelings are what make things matter to us. Without feelings, nothing would matter to anybody. My question is: does this conception of feelings imply a conception of the self?

    1. Yes, humans don't just feel, but feel they have a self. I'm not sure whether Aplysia have a feeling of being a self continuing across time. But if they feel at all, they have the full-blown hard problem, just as surely as we do.

      And also, even if it's only "ouch," things matter to Aplysia too.

  10. I agree that the hard problem is about consciousness, and might even agree that consciousness is just feelings (depending on how that term is then cashed out). But I disagree that intentionality is synonymous with consciousness, or that it is just a "weasel word" for consciousness. I think intentionality in Brentano's sense is neither necessary nor sufficient for consciousness.


    1. Brentano said that "intentionality" -- "aboutness," the fact that thoughts and words are always *about* something, that they have an "intended object" -- was the "mark of the mental".

      What is a *mental* state or process if it is not felt? Just an internal state or process (whether internal to a brain, a robot or a teapot).

      The string of symbols that is a sentence in a book, or in a dynamic computer programme, is only "about" something because it is systematically interpretable (by external interpreters like us) as being about something. Their "aboutness" is completely parasitic on the "aboutness" in our heads. (Searle will remind us of this on the last day.)

      And that aboutness in our heads is felt. Otherwise it may as well be going on in a teapot.

    2. Yes, Searle distinguishes between original and derived intentionality to overcome these issues, but neither kind of intentionality is "synonymous" with consciousness.

  11. Stevan holds that there are two things that cognitive science needs to explain: (1) How and why organisms can do all the things they can do and (2) how and why organisms feel. I agree with this view, but I am wondering (provocatively): Would it make sense to ask (2*) how and why do some things NOT feel?

  12. Of course the question "how and why do some entities feel some things, sometimes" is intimately related to the question "how and why do other entities manage to do what they can do without feeling."

    In fact, "How and Why Do We Feel" is equivalent to "How and Why Are We Not Zombies":

    1. I like this comparison; it surely rings a bell regarding what we call feeling. The gap between the zombie mind and the human mind seems to gather a lot of phenomena related to the "what does it feel like to be human?" question. Great thought experiment.

  13. Stevan says that either (1) there is something it feels like - and thus one is conscious, or (2) there is nothing it feels like - and thus one is not conscious.

    I agree with this distinction, but for the sake of intellectual interest I will try to be provocative.

    Why could we not (at least as a logical possibility) say that (2) "there is nothing it feels like to me" is equivalent to (2*) "it feels like nothing to me"? In this case, it would not be that there is no "object" of feeling, but rather that the "object" is an empty set of feelings. This is just a matter of arbitrarily selecting a threshold for what counts as consciousness.

    By adopting 2* we could argue that every doing comes with a feeling; it's just that most feelings feel like nothing. If this were the case, any doing would come together with some feeling. Then the question of why and how we have feelings would appear as urgent as that of knowing why and how there are doings.

    I'm curious to know what you (and everyone else, of course) think about it.


    1. Not feeling X is not the same thing as not feeling anything at all: there are plenty of things I don't feel -- including things I feel now and then stop feeling. The difference between feeling X and not feeling X is easy. The difference between feeling and not-feeling is not only different from this, but it is something we can never know at all (it probably does not even make sense), because we cannot feel what it feels like to not feel anything at all.

      We know what it feels like to feel. No one and nothing knows what it feels like to not-feel. Not even a teapot.

    2. My claim was stronger, namely: there is something it feels like to feel nothing at all (i.e., "feeling nothing at all" is one peculiar kind of quale).

      You say that since we cannot feel what it feels like to not feel anything at all, we cannot KNOW the difference between feeling and not feeling.

      However, if - for the sake of the argument - you buy my definition, then there is no longer any such traditional distinction (yours) between feeling and not feeling. Thus, there is nothing more that needs explaining; at least no more than explaining the difference between what it feels like to perceive the world with human eyes and with a sonar-based system - a feeling that, like that of "nothingness," we cannot feel.

      In this view, "not feeling anything at all" is a quale such as "redness", or any other. Its QUALITY is certainly peculiar, but that is precisely the definition of qualia. I don't see any problem there.

      You argue: "We know what it feels like to feel. No one and nothing knows what it feels like to not-feel. Not even a teapot."

      My reply would simply be: Nobody KNOWS what it feels like to not-feel, but that does not imply that a vast number of things cannot FEEL what it feels like to not-feel. I take it that having a feeling of "nothingness" and not knowing that seems to be a logically consistent idea precisely because of the QUALITY of that particular kind of feeling.

      A teapot might not KNOW what it feels like to not-feel, and yet FEEL "nothingness". I agree that this might appear counterintuitive, but I don't see how this is logically contradictory unless you hold that feeling always entails knowing (which, I think, is not the case).

      (I have not yet read your article, so I apologise if I'm not referring to that here)


    3. Feeling what it feels like to not feel is a contradiction in terms. I'm afraid that anything, and the opposite of anything, follows from that.

      "What it feels like to feel" is a lop-sided category, because it is impossible to sample its complement ("what it feels like to not-feel") because its complement is self-contradictory.

      Nevertheless the category "what it feels like to feel" is not empty, and this Summer Institute was on how and why its contents got there...

    4. So am I correct in generalizing this debate to the archetypal "a lack of a thing is not a thing"?


      No, an uncomplemented category is worse than that. A non-color is no problem: a sound is a non-color, and it's a thing. Absence is a thing; it's complemented by presence, which is a thing. (Things are categories: things we can tell apart.)

      But the complement of what it feels like to feel would be what it feels like to not-feel; not what it feels like to not-feel sad, or to not-feel that the time has slipped by. That's not a problem either. The problem is what it feels like to not-feel at all. That's self-contradictory, like the color of a colorless thing.

      That does not mean it doesn't feel like something to feel. It just means that what it feels like to feel is a category that is likely to have anomalies, because it is not well-instantiated, with positive and negative instances, like most other categories. (Feeling is not the only uncomplemented category. Existing is another.)

      See: "Uncomplemented Categories, or, What Is It Like To Be a Bachelor"

  14. posted on FB during the talk

    Harnad said one of the reasons there is a hard problem regarding the explanation of why and how we feel is that we only know of four physical forces (gravity, electromagnetism, weak, strong) and they cannot account for feelings. He seemed to be implying that if there were a fifth force (and we somehow found it in the near future), then maybe we could explain how and why we have feelings. But how would positing a new force help explain how and why we have feelings? Forces only account for how certain particles move around each other. By his own criterion, they could only explain 'doing', i.e. observable movements.

    Good point. But if feeling were a 5th force then, like the other 4 forces, we could accept it as just a given fundamental property of the world, like gravitation. We don't ask how/why there is energy, etc. But the point is that feeling is not a 5th fundamental force: all the evidence contradicts that; the other 4 are enough to cause and explain everything.

    If everything is explained, we don't need a 5th force... If we manage to explain everything about the brain through neural activity, will there be a need for feelings?

    That's the point. It looks as if there is no need for feelings. Yet they are there. And no doubt they are caused by the brain? How? and Why? -- since there's no need, and all doings are fully explainable without any recourse to feelings.

    Is the feeling thing a philosophical question or a scientific question?

    It's a biological question: Organisms feel: How? Why?

    Even if we record all the neural activity of an organism, the only way to know the corresponding feeling is for it to tell you. So if you could record all the activity in a human brain while it reports what it's feeling, shouldn't one just map onto the other somehow?

    Pointing out that the problem of feeling is the same as the problem of meaning, and saying it is probably impossible to solve, made me think of the Société de Linguistique de Paris's 1866 ban on discussions of the origins of language, because it sounds like a kind of warning against telling stories...
    I mean, to assert that the "hard problem" is a biological question is a way to remind us that we do not have access to the origins of feeling or meaning. We can only focus on the conditions that allow feeling/meaning in our body, in our brain, without absolute causality or determinism; substrates are not causes but conditions that can be described.
    What I understood in part from Harnad, then, is that the "why" question could only be answered by scenarios, which is not suitable, so the problem remains unsolvable. But still a crucial one!


  15. "According to Pr. Harnad's conception of consciousness, feelings, self-awareness and consciousness are synonymous. But what about the restricted number of species that pass the mirror test? If only a limited number of species have self-awareness (great apes, dolphins, some birds, elephants), does that mean that all the other species don't have consciousness because they lack self-awareness? And what about species that clearly have feelings (such as cats) but are not self-aware?"

      "There is mirror self-recognition with or without feeling. (You can design a (toy) robot to recognize itself in the mirror.) All those other words for consciousness are either exact synonyms for feeling (like experiencing, having qualia, etc.), or they are feeling + something more (felt self-recognition in the mirror, a felt sense of self, feelings about having feelings). All other species that feel are conscious, whether or not they recognise themselves in the mirror and whether or not they have a concept of "self" (not the same thing as mirror recognition)."

      "I don't think I really understand the full meaning of "awareness"... do we attribute the same meaning to awareness in "self-awareness", "body-awareness", other-awareness", and so forth?"

      "It would be interesting to see if the equivalent of the TPJ would be similar in animals that can recognize themselves and those who can't!"

      "are self-awareness and self-recognition two different things in terms of feelings?"

      "Unless it is felt, "awareness of," "consciousness of," etc. all just mean access to some information. And information is just data, unless it is felt. What Dan Dennett should have said about "access consciousness" vs. "phenomenal consciousness" was that there is no "access consciousness": there is only access to information. That information can be either felt or unfelt. If unfelt, then it's just mindless information-processing (i.e., doing). If it is felt, then we have the only consciousness there is, which is feeling (so why bother with the long and redundant synonym "phenomenal consciousness"?). But of course that's not what Dan said. Rather, what he said was that there is no phenomenal consciousness: what we mean by consciousness is just access (i.e., just doing, no feeling -- or rather, what we mean by feeling is just certain doings -- the "accessible" ones we can talk about...)"

      "Thanks, I understand much better now! I like the idea that whether data are felt or unfelt depends on attention. Why do we have kinds of data that will never reach consciousness (or be felt) whereas others can? Do you think this might have any adaptive function? Is there any advantage for the brain in discriminating data that need continuous processing (the unfelt) from data that need to be strengthened, as a function of the environment, by an increased emotional response caused by increased attention (the felt)? Do you know if this question has already been tackled in the study of consciousness?"

      "The hard part is felt information. Unfelt information processing is "easy": Why is it not all unfelt? The adaptive advantage comes from the doing: information-processing is just doing. That leaves the ("hard") question of how/why some of it is felt unanswered."

      "What about this hypothesis: it's all about learning; if nothing is felt, there is no learning. That's why machines can't learn."

    2. Why do you need to feel to learn? (And machines certainly can and do learn.)
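    The claim that machines can and do learn (in the mindless sense of doing) can be made concrete with a minimal illustration -- a perceptron learning the logical AND function from examples. The function names, learning rate, and epoch count here are arbitrary choices for the sketch, not anyone's published model; the point is only that error-driven weight updates constitute learning with nothing felt anywhere.

```python
# A minimal perceptron that learns logical AND from labelled examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y          # error-driven update: pure doing
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])   # [0, 0, 0, 1]
```

    The machine's behaviour changes as a function of its history of inputs -- which is all "learning" means, behaviourally -- yet the explanation never needs to mention feeling.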

  16. Originally posted on facebook

    Question about Stevan's talk: do you think that feeling (in terms of the hard problem) is uniquely human?

    No. I think most organisms (invertebrates, vertebrates, mammals) feel. That's why I'm a vegan!

    It's hard to tell. Animals understand things; does it feel like something for them to understand? It's possible. But do they feel, for example, the fear of losing their child, or the fear of never having sex again? I'm not sure. Maybe their "mind" is just not as complex as ours.


      You don't need to be able to feel the fear of losing a child in order to be able to feel pain.

    About the Harnad talk, take the following thought experiment: we can imagine a specific disturbance that would specifically affect all the feelings of someone; however, in this case we do not want to say that such a human is now a zombie. What do you think about that?

    All the feelings, as in all the sensory feelings? Or do you mean all the feelings like feeling you understand, feeling you believe, feeling you are yourself? If that's what you refer to, then why wouldn't you say the person is zombie-like? It's as if she is not there anymore, almost as if she were in a coma.

  18. Xavier Dery ‏@XavierDery

    Glad to hear Harnad say 'T3 is the goal'. Indeed, the brain having been thrown together by evolution, emulating has to be simpler #TuringC

    9:45 AM - 1 Jul 12 via Twitter for Android

  19. I'm still trying to wrap my head around the implications of equating feeling to consciousness. I think for the most part, this approach is useful, but I am finding some implications troubling.

    I am particularly troubled about the implications at the neuronal level. If feeling is an all-or-none phenomenon, then presumably instances of feeling (punctuated, e.g., by episodes of deep sleep) share something in common. No matter the nature or intensity of inputs, there must be a shared quality to all our instances of feeling. Would you agree? If so, how then must we envision the relationship between the phenomenological experience of feeling and the neuronal substrate of feeling? Can it be anything other than a one-to-one association, meaning that for any instance of feeling we are likely to activate the same set of neurons? This seems unlikely. I was just curious as to what you hypothesize the neural substrates of feeling look like (one or many)?


      Regardless of whether feeling has one unique neural correlate or many, the problem is not finding the neural correlate of feeling but explaining how and why it causes feeling (rather than just doing).

      (Reminder: both organisms' bodily movements and their neural activities are just doings.)

    2. I am not sure I understand your point. How can answers to the problem not be informed by finding the neural correlates of feeling? The neural correlates of feeling are a form of doing. Yes. But if we are interested in the function of feeling, then it seems natural that loss-of-function experiments might provide important clues to the function of feeling. It seems natural, for instance, that induction of a transient lesion by temporary deactivation of neural activities that are responsible for feeling (the 'doing that generates the feeling'), would yield valuable insights into the 'why' question.

      In other words, feeling can't occur without some form of doing (unless one espouses some form of dualism...). The doing is amenable to scientific exploration. The feeling as a subjective experience is not. If we successfully modulate the 'doing that generates feeling', then we will have a chance to learn lots about the function of the feeling.


      Here is an (invented) example of the kind of neural correlation that will certainly help us understand (and predict, and maybe even remedy) feelings: The finding that neural activity in region R at an early age is predictive of uncontrollable rages at a later age, and that early pharmacological or neurogenetic intervention can cure this.

      But what this kind of correlation cannot tell us is how or why neural activity causes feelings at all. The correlations would have been the same if the neural activity had been correlated not with the feeling of rage, but just with the tendency toward aggressive behavior (doings).

      The question of function/feeling correlation and causation is a very subtle one.

    4. My claim is the following: I don't see how feeling can be anything but a 'special' kind of doing (neural activity). The only (erroneous) reasons we have for doubting this were, I believe, rightly put forth by Searle: that the existence of feeling is apprehended through subjectivity.

    5. @Nico
      Well, yes, feelings are felt ("the existence of feeling is apprehended through subjectivity") rather than just done. But explaining how and why they're felt rather than just done is the problem!

    6. You agree that there are two aspects to feeling ("feelings are felt rather than just done"). These can be defined as (i) the 'doing' that generates the 'felt'; and (ii) the subjective 'felt'. (i) is open to empirical investigation since it is brain activity. (ii) is a subjective experience. (i) and (ii) fall in different ontological categories, but there still is a causal link between the two: simply, if there is no (i), then there is no (ii).

      If we find the neuronal activity that does (i), and disrupt it transiently, then because of that lesion, there will be no (ii). If feeling has a causal role, then any effects following disruption of (i) will be sufficient to give a function to feeling.

      Following this logic, disruptions of (i) could produce a proper zombie. If the zombie acts any differently from a non-lesioned counterpart - or acts in ways that are obviously harmful to its own survival - then a function for feeling will have been found. Classic loss-of-function experiment.

      I am certain you will disagree. Where, in your opinion, does my logic fail?

    7. THE LOGIC OF LESIONS I (1 of 2)

      @Nico: "You agree that there are two aspects to feeling ("feelings are felt rather than just done")."

      No, I never said anything about "aspects": Feelings are palpably something we feel, not something we do. (And like every other person with common sense, I believe the brain somehow causes feelings. I just want to know how and why.)

      @Nico: "These can be defined as (i) the 'doing' that generates the 'felt'; and (ii) the subjective 'felt'."

      To repeat, "aspects" is either vacuous, or yet another weasel-word, and there is nothing being "defined" here. We each know we feel. And surely the brain causes our feeling: how? why?

      (And why the redundant weasel-word "subjective"? Is anything felt that is not subjective? Is there such a thing as "objective" feeling?)

      @Nico: "(i) is open to empirical investigation since it is brain activity."

      Brain activity is brain activity. And brain activity is doing. No more mileage to be derived from that: We don't know how or why brain activity generates feeling.

      @Nico: "(ii) is a subjective experience."

      Feeling is feeling (Why all this formalism?) "Subjective" is redundant and "experience" is just another ambiguous weasel-word for feeling.

      @Nico: "(i) and (ii) fall in different ontological categories,"

      What on earth do the weasel-words "ontological categories" add (or mean?) here. Are apples and oranges in different "ontological categories"? They're just different kinds of things. So too are objects and movements: no "ontology," just different kinds of things. Ditto for "abstract" and "concave" (sic). Well, doings and feelings are different too.

      The problem is explaining how and why organisms feel.

      @Nico: "but there still is a causal link between the two: simply, if there is no (i), then there is no (ii)."

      No brain, no feeling. Absolutely correct. (Now how does that explain how and why the brain causes feeling?)

      @Nico: "If we find the neuronal activity that does (i), and disrupt it transiently, then because of that lesion, there will be no (ii)."

      No brain activity X, no feeling. Again true. But how does that explain how and why brain activity X causes feeling? (Do you think that an explanation of how and why X causes Y just consists of showing that if there is no X there is no Y?)

      @Nico: "If feeling has a causal role, then any effects following disruption of (i) will be sufficient to give a function to feeling."

      This sounds like a re-statement of the suggestion that an explanation of how and why X causes Y just consists of showing that if there is no X there is no Y.

      (part 2 follows)

    8. THE LOGIC OF LESIONS I (2 of 2)

      @Nico: "Following this logic, disruptions of (i) could produce a proper zombie."

      Do you think that logic is going to tell us whether or not zombies are possible? Not even the Turing Test can do that!

      A zombie would be an organism or robot that can do anything a feeling organism can do, so you can't even tell them apart, but it doesn't feel.

      How does the "logic" of disruptions tell you whether or not there can be a zombie? And how would you tell whether or not something was a zombie?

      @Nico: "If the zombie acts any differently from a non-lesioned counterpart - or acts in ways that are obviously harmful to its own survival, then a function for feeling will have been found. Classic loss-of-function experiment."

      If the lesion causes the organism to act differently from the unlesioned organism, then you will have shown that the lesion causes a difference in doings. You will not have shown whether or how the brain causes feeling. Even if a patient says "your lesion took away all feeling from my arm," you will just have found a lesion that causes anaesthesia.

      No, Nico, the "hard" problem is harder than that -- and cannot be solved by "logic" alone.

  20. Dr. Harnad, I would appreciate a clarification of your conception of the hard problem. I've heard you at times state that the hard problem is a question of how and why we feel. At other times, you state it is simply a question of why we feel. These two versions of the hard problem are very different, and only one of these is consistent with the hard problem articulated by Chalmers and consequently discussed and debated.

    [For those who may not share the confusion, when describing the hard problem Dr. Harnad states: "I know how to explain all of this stuff… the dynamical stuff… planets revolving, apples falling, neurons squirting, organisms behaving, organisms' behavioural competence. Once Turing’s program [T3] is over, we will have the answers to all of those. But in what sense will that explain the fact that organisms feel? How do they feel… how is it they can feel at all, and in a sense, even more fundamentally, why do they feel? And that why as I repeat is not teleological… What is the functional role of the fact that they feel.” ~23 mins. This version of the hard problem initially includes how/why, and later stresses simply the why. To my best recollection, this how/why description of the hard problem was reiterated in personal conversation, and as a result some of my peers were led to believe the hard problem is a how/why problem. Dr. Chalmers, in contrast, differentiates between how (easy) and why (hard) problems as follows: "The easy problems - explaining discrimination, integration, accessibility, internal monitoring, reportability, and so on - all concern the performance of various functions. For these phenomena, once we have explained how the relevant functions are performed, we have explained what needs to be explained. The hard problem, by contrast, is not a problem about how functions are performed. For any given function that we explain, it remains a nontrivial further question: why is the performance of this function associated with conscious experience? The sort of functional explanation that is suited to answering the easy problems is therefore not automatically suited to answering the hard problem."]

    I believe that explaining the hard problem as how and why we feel confuses two very distinct questions. The how, to me, is simply a "doing" problem, making it a tractable problem that may be experimentally tested and empirically solved; hence it is an easy problem. The epistemic question of why is irrespective of the how (as you've stated in your response to Nico above). So why, when explaining the hard problem, do you describe it as a question of how and why we feel?

    Hearing a direct clarification of this would very much assist me in understanding your perspectives.


    Roberto, you've conflated the "easy" problem of how and why organisms can do what they can do with the "hard" problem of how and why organisms feel.

    Yes, I sometimes use "why" as short-hand for how/why, but as I also always stress, the why is a causal, functional why, not a moral or teleological one.

    Here is an illustration of how "how" and "why" questions are functionally interchangeable:

    HOW does a thermostat keep the temperature constant? (It turns on the furnace when the mercury drops below the set temperature.)

    WHY does a thermostat turn on the furnace when the mercury drops below the set temperature? (To keep the temperature constant.)
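    The interchangeability of the thermostat's "how" and "why" can be made concrete in a few lines (the set point and the crude room dynamics are invented for illustration): the HOW is the control rule; the WHY shows up as the rule's net effect when the loop runs.

```python
SET_POINT = 20.0   # hypothetical set temperature

def furnace_on(temperature):
    # HOW: the furnace runs exactly when the temperature is below the set point.
    return temperature < SET_POINT

# WHY: simulate the loop and the net effect appears --
# the temperature is held near SET_POINT.
temp = 15.0
for _ in range(100):
    temp += 0.5 if furnace_on(temp) else -0.5   # crude room dynamics

print(abs(temp - SET_POINT) <= 0.5)   # True
```

    This is the kind of complete, unmysterious causal story that works for any doing -- and it is exactly the kind of story that never seems to get a grip on feeling.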

    It is precisely this sort of simple, natural causal explanation of how and why a physical system works the way it does that does not work when you are trying to explain how and why organisms feel. There is no such problem -- in principle, though we certainly haven't yet succeeded in practice! -- with explaining how and why they can do what they do.

    This "hard" how/why problem is a problem because the causal degrees of freedom have already been fully (and successfully) used up in explaining all doing (the "easy" how/why problem). The four fundamental forces (electromagnetism, gravitation, etc.) are enough; all causes and effects are accounted for (in principle).

    Our grandparents' intuitive explanation of how and why we feel -- feeling as an independent causal force (telekinetic dualism) -- would have been just fine as a solution to the hard problem. The trouble is that all the evidence is that it is completely wrong.

    I don't see eye to eye with Dave Chalmers, by the way, either on computationalism or the "hard" problem (which Dave named, but certainly didn't invent!)

    Harnad, Stevan (2012) The Causal Topography of Cognition. Journal of Cognitive Science 13(2): 181-196

    Harnad, S. (2001) Harnad on Dennett on Chalmers on Consciousness: The Mind/Body Problem is the Feeling/Function Problem. (unpublished)

    Harnad, S. (2000) Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. Journal of Consciousness Studies 7(4): 54-61.

    1. Thanks, I do believe I can more clearly understand your position. That's not to say I would accept it as my own, however.

      It seems as though you have a conceptual wall up separating "doing/easy" and "feeling/hard". As you say, explaining all of the neuronal activity of the brain in terms consistent with the four fundamental forces will not explain how we feel. On this point, we fundamentally disagree, though at this time neuroscience does not have the tools to prove your position wrong. I can only provide useful analogies for other natural phenomena which have seemed similarly "hard", but eventually yielded to the scientific method. Perhaps there will be room to expand on this in the paper.

      I intentionally conflate the easy and the hard problems, because I believe that the mystery of the hard problem will dissolve when we fully understand how the brain produces feeling; since feelings are generated within the brain, understanding everything about what the brain is doing when it produces subjective feeling will allow us to explain the hard part of how we feel. To think anything other than this seems vitalist. Only when we explain how we feel will we be able to properly understand why we feel, because then we will be able to define feeling in terms that are more descriptive than the unavailing "everyone knows what it is to feel".


      Ok, I look forward to your analogies. But to save some time, do look in advance at the prima facie reasons why the vitalism/animism analogy doesn't work.

      The wall (which I certainly didn't invent) is called the "mind/body problem." All I've done (to clear the air and focus efforts) is to suggest that in reality it's the feeling/doing (or the feeling/function) problem.

    3. Thanks for the link - the discussion always improves when someone is willing to provide prima facie points to build upon, so that's much appreciated.

      I still don't understand how in your linked thread, you rightly claim that the brain causes feeling (as every monist will agree), yet maintain the position that feeling is so fundamentally different from doing: the brain's "doings" cause feeling! We have no reason to doubt that explaining everything the brain does will explain how we feel — aside from the fact that we have not mechanistically explained feeling as one more of the brain's "doings" yet. I know you would not believe that nothing can be explained if it is not yet explained, so I wonder what could possibly lead you to believe that how we feel is such a hard problem, so different from how the brain does everything else that the brain does.


      My monism (like everyone else's) is an act of faith: I'm as sure as I am of any other truth about the world that the brain causes feeling, just as it causes doing -- but I want to know how and why.

      And the trouble is that every how/why explanation turns out to be an explanation of how/why the brain causes doing, not how/why it causes feeling.

      The feeling is there, alright, tightly correlated with brain doings. But correlation is not causation, and what we are looking for is a causal explanation.

      In fact, it looks very much as if feeling is somehow causally superfluous, even though it is caused by the brain, because no one has a clue of a clue as to how/why all the brain's doings could not be caused in exactly the way they are caused, without causing feelings too.

      So the reason no analogy with previous successful explanations of doings -- whether from astronomy, physics, engineering or biology (including the origins, evolution and nature of life and living organisms) -- gives the slightest reason to believe that the same will work for feelings is that feelings are not doings.

      Feelings are different. And although they are no doubt caused (somehow, and perhaps for some empirical or even functional reason) by the brain's doings, there is not even a hunch of a hunch about the how/why of that "somehow".

      I may be wrong that the hard problem is insoluble. But I doubt that I'm wrong that not only is every proposed solution so far dead-wrong (and obviously dead-wrong) but the special nature of feeling makes all the analogies of prior successes that make people think there's hope into equally obvious disanalogies, hence equally wrong.

      The optimistic vitalism/animism analogy is the first and most common of these hopes: "We had thought explaining life was a "hard," perhaps insoluble problem, but it turned out to soluble (and not even so hard!). So will explaining the mind."

      Unfortunately not. There was no further fact of the matter to account for, in explaining what it is to be alive. It turned out to be just doings, like everything else, animate and inanimate. Once all the facts of life were accounted for, there was nothing left. The (dualistic) notion that there was the need for an extra "vital force" was not only wrong, but had in fact been obviously wrong from the very beginning: Life is just doing, like everything else.

      But with explaining what it is to have (or be) a mind, there is something extra, and that is feeling. And although there must be something the brain does to cause feeling, feeling itself is not doing. So we are left with a hard problem of causal explanation: how and why does the brain cause feeling? What is the causal role of feeling?

      We may never be able to answer, but it's perfectly natural to ask -- and to go on asking. (And to keep debunking putative answers that are obvious non-starters…).

      I happen to think that even behind vitalism, the source of (some) people's doubt that it could be explained was in fact an implicit animism: They felt that life is different because they assumed that it feels like something to be alive.

      Which it does, for all feeling organisms (but not, I dearly hope, for plants!)

    5. You state your fundamental critique of the life analogy to be that life was always just a bundle of objective properties that have now been fully explained; hence there was nothing hard about the problem of life. Life was never more than MRS GREN in good measure. I follow you to this point, and believe you've stated this matter admirably, and in terms more eloquent than I certainly could. 

      I meet your next point with reservation. "…the reason no analogy with previous explanations of doings… gives the slightest reason to believe that the same will work for feelings is that feelings are not doings." Life was a concept philosophized as something too incomprehensibly complex to be caused by doings; thus life was not believed to be governed by physical laws. However, hermeneutic description of life as a complex integration and interrelation of physical doings (metabolism, respiration, sensitivity, growth, reproduction, excretion, nutrition; all individually necessary but insufficient) yielded a fundamental description of life that was tractable and empirically possible to validate (and has been).

      Through a similar process we will understand how feeling is produced; I believe this because we are unsatisfied with the only level of reductionism that we know to be correct (feeling is caused by neural activity), and we know that feeling is not caused by an aphysical psychokinetic force. For this reason, I think the dismissal of any attempt to reify feeling as its own component parts (some of the candidate models you have pointed to in our discussion on Dr. Baars' talk) on the supposition that hermeneutic proposals are merely correlational and cannot be causative is improper, or at worst counter-productive.



      Here's what I think you're missing:

      (1) Life was not "hermeneutically described (i.e., interpreted)": It was causally (and fully) explained (reverse-engineered). MRSGREN were (some of) the properties to be explained, and biochemistry, biophysics, and physiology causally explained them, with no left-overs.

      (2) Moreover, MRSGREN (and the rest of life's properties) always were just doings. So there were never any substantive grounds for supposing there had to be a mystery (other than finding the right physical explanation).

      (3) In contrast, I have to remind you that feeling is something rather different from MRSGREN (and all other known physical properties): It is unobservable, yet it is real -- but the only reason we know it is real is that we are each feelers, and have felt it (though one can only be sure about oneself).

      (4) In addition to being unobservable by anyone but the feeler, it also looks as if there is no room for feeling to have any independent causal role of its own, whether functionally or evolutionarily: All doings are explicable without feeling; so feeling seems to be causally superfluous.

      (5) There was never any property like that in the case of life -- except, of course, feeling itself, which is a biological property of (some) living organisms (and, I suspect, may have been the real underlying reason why people had thought life was special, and inexplicable in the usual way). The mnemonic should be MRSGRENF!

      (6) You seem to think that not taking hermeneutic interpretations to be causal is somehow improper and counter-productive: I wonder why?

      (7) Interpretations are not causal explanations: The word that comes to mind, if they are nevertheless taken to be causal explanations, is not proper or productive but credulous!

      It's natural to want to have a causal explanation for something as fundamental as feeling. And if you don't believe me that a causal explanation may not be possible, that's fine (I'm not sure I believe myself on that): I hope you keep trying. But that does not mean dropping your guard and accepting hermeneutics instead!

    7. Thanks for your response. I believe you've well identified where our opinions diverge, and I'll take some time to reflect on your positions.

      I'll first address your question from thought (6), and I'll try to adopt your language from (1).

      Through reverse-engineering, we were able to understand life as a sum of its constituent parts, all of which were doings. I believe our explanation of consciousness will be a similar one (i.e., consciousness is never going to be reduced to activity of a single locus, but rather a sum of its constituent parts, which account for our experiential subjectivity, unity, intentionality, etc). I know that you don't believe that this explanation of consciousness is at all suitable because it uses ontologically objective processes (brain activity) to describe a subjective phenomenon, but this is one area where our opinions diverge.

      If what I believe turns out to be correct (which you have every right to doubt), then to dismiss theories that attempt to explain consciousness through study of its constituent parts (ontologically objective patterns of brain activity, some of which you have listed) would lead us away from answering how consciousness is produced by the brain; hence, I say dismissal of such attempts is improper, or at worst counter-productive.

    8. Dr. Harnad, like I've mentioned above, I believe our opinions diverge on the matter of whether explaining all of the brain's doings will explain how we feel, and offer us the key to understanding why we feel.

      As a result of the conference, I find myself constantly learning new terms (with all of their esoteric nuance and rigidity) to describe rather intuitive concepts that I had not been previously exposed to as a physiologist.

      One of these is property dualism. In my grasp of the topic (informed largely by Chalmers and Searle), I understand property dualism to be the view that mental and material entities cannot be reduced to each other since their fundamental properties are not common; each of feeling and matter is a fundamental property in its own right. This seems to me much like your offer that feeling is not doing, and doing is not feeling.

      I'd like to ask whether (if you accept my basic summary of property dualism) 1) you would describe yourself as a property dualist (if some other iteration of this concept, please identify). If so, I'd ask 2) whether you believe there is any evidence to support the inherent supposition of property dualism that feeling is not doing, aside from the obvious fact that we have not yet explained feeling through doing.



      I have no ontological position or interest, so it would be a waste of time to try to classify my "ism." As far as I'm concerned, the only stuff there is in the universe is matter/energy, as the physicists tell us. But matter and energy do have lots of latent properties, such as those of life, which evolved here on earth. I am sure feeling is one of those properties. All I ask is for an explanation of how the brain generates feeling, and -- more important, because it is a functionalist question, not just about how feeling is caused, but about what functional/adaptive role it plays -- why the brain generates feeling.

      "Property dualism" -- like all the other metaphysical "isms" one can espouse in contemplating the mind/body problem -- is completely vacuous. It is just a statement of a belief, with no explanatory power whatsoever. I'm looking for an explanation.

      When I say feeling is not doing, I am making a simple empirical, methodological observation: Doings are observable, by anyone, with either senses or measuring instruments. (In addition, there are other, unobservable things -- such as (maybe) unbound quarks -- whose existence is merely inferred on the basis of what turns out to be necessary to explain other things that are observable. Let's forget about such theoretically driven unobservables, as they are irrelevant here.)

      So the doings of matter/energy, including atoms, planets and organisms, are observable, and causally explainable.

      Feelings are not observable by anyone except the feeler. That already puts them in a class by themselves. In addition, organisms' doings look as if they will be completely explainable causally without any need for feelings. Yet feelings really are there. So they look causally superfluous. That seems unlikely, since feelings are so ubiquitous in organisms. But then what is their causal explanation, and causal role? (How and why do organisms feel, rather than just do?)

      "Isms" aren't the answer.

    10. Stevan: "So the doings of matter/energy, including atoms, planets and organisms, are observable, and causally explainable."

      What is the difference between an observation and a feeling?

    11. @arnold: "What is the difference between an observation and a feeling?"

      Anyone can observe your atoms but only you can observe your feelings.

    12. Stevan: "Anyone can observe your atoms but only you can observe your feelings."

      Your answer misses this essential point:


      Descriptions of feelings can be covert (e.g., expressed only within your brain), or public (expressed openly for others to observe). Scientific descriptions are always public descriptions of covert feelings because descriptions derive from observations of feelings. Why should we consider the public description of the feeling of an observed atom to be essentially different from the public description of an observed triangle as in my SMTT experiment?



      Yes, all observations are felt observations, whether they are observations of the meter-readings on a geiger counter or observations of your own momentary mood. But anyone can read the same geiger counter, whereas only you can read your own mood.

      (Please don't reply about people misjudging their own moods in hindsight, nor about psychophysicists "measuring" other people's sensations. These both miss the point and beg the question.)

      And the same question applies to both meter-readings and mood-readings: Why and how does either of them feel like anything at all? (Except that moods, unlike meter-readings, would not exist at all, if they were unfelt.)

      You have not given even a clue of a clue to the answer, Arnold, no matter how sanguine you feel about the explanatory power of your retinoid model. All you have done is come up with an interpretation of your model which feels like it squares with what it feels like to have visual experience. Correlations.

      Without even dwelling on the fact that this is all just visual, I can repeat to you -- but it will do no good -- that all you are doing is hermeneutics (interpretation). You are not even touching the fact that visuomotor function (let alone any cerebral function) is felt, let alone explaining it.

      But you still keep making the same point over and over, and I keep replying to it the same way over and over. I really do think it's time to stop now. This is not a forum on the retinoid model.

    14. @Stevan

      Put the retinoid model aside. I have described what a feeling (any feeling) is like for me. What is a feeling like for you? Have you ever had a feeling that wasn't something somewhere in a spatio-temporal relation to you?

    15. @Stevan

      You are right when you say "Explaining the Causal Role of Consciousness is Hard". But what isn't described can't be explained. So until you give a description of what consciousness/feeling means for you, it is unlikely that any causal explanation of consciousness, no matter how well it satisfies scientific norms, will satisfy you.


    Stevan: "... feelings are not doings."

    On what principled grounds do you make this flat pronouncement?

    I counter with the claim that feeling (conscious experience) must BE a particular kind of brain activity. I have detailed a theoretical model of the kind of brain activity that constitutes feeling (autaptic-cell activity in the brain's retinoid space). I have tested the implications of the retinoid model in experimental trials. The results of the experiments support the retinoid model of feeling. Previously inexplicable instances of feeling are also predicted/explained by the neuronal structure and dynamics of the retinoid model. Do you believe that the retinoid model is not a credible candidate theory of feeling only because you believe that *feelings cannot BE brain activity*?

  23. @arnold

    I do not believe the retinoid theory of feeling because it is not a theory of feeling, it is a theory of "perspectival/volumetric" doing. Dubbing perspectival/volumetric doings "feeling" is just interpretation, not causal explanation.

    1. Stevan: "Dubbing perspectival/volumetric doings "feeling" is just interpretation, not causal explanation."

      Yes, but providing the essential details of the neuronal structure and dynamics of the brain mechanisms that give us a perspectival volumetric representation of the world around us (the retinoid system) IS a causal explanation of feeling within the norms of science.

      Following the implications of the retinoid model of feeling, we can systematically induce in your brain a variety of feelings that cannot be otherwise explained except by the neuronal activity of the retinoid mechanisms.

      Here is an example:

      1. Subjects sit in front of an opaque screen that has a long, very narrow
      vertical slit as an aperture in its middle. Directly behind the slit is a
      computer screen, on which any kind of figure can be displayed and set in
      motion. A triangle, drawn in outline, with a width much longer than its
      height, is displayed on the computer. Subjects fixate the center of the
      aperture and report that they see two tiny line segments, one above the
      other on the vertical meridian.

      2. The subject is given a control device that can set the triangle on the
      computer screen into horizontal reciprocating motion (horizontal
      oscillation) behind the aperture, so that the triangle sweeps past the
      slit in alternating directions. A clockwise turn of the controller
      increases the frequency of the horizontal oscillation; a counter-clockwise
      turn decreases it. The subject starts the hidden triangle in motion and
      gradually increases its frequency of horizontal oscillation.


      As soon as the figure is in motion, subjects report that they see, near the
      bottom of the slit, a tiny line segment which remains stable, and another line
      segment in vertical oscillation above it.

      As subjects continue to increase the frequency of horizontal oscillation
      of the almost completely occluded figure there is a profound change in their
      experience of the visual stimulus.

      At an oscillation of ~ 2 cycles/sec (~ 250 ms/sweep), subjects report
      that they suddenly see a complete triangle moving horizontally
      back and forth instead of the vertically oscillating line segment they
      had previously seen.

      As subjects increase the frequency of oscillation of the hidden
      figure, they observe that the length of the base of the perceived triangle
      decreases while its height remains constant. Using the rate controller,
      the subject reports that he can enlarge or reduce the base of the
      triangle he sees, by turning the knob counter-clockwise (slower) or
      clockwise (faster).

      3. The experimenter asks the subject to adjust the base of the perceived
      triangle so that the length of its base appears equal to its height.


      As the experimenter varies the actual height of the hidden triangle, subjects
      successfully vary its oscillation rate to maintain approximate base-height equality, i.e. lowering its rate as its height increases, and increasing its rate as its height decreases.
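
      As an aside, the inverse rate-height relation just described can be sketched numerically. The toy Python model below is my own illustrative assumption, not part of the retinoid theory: it simply supposes that the perceived base width of the reconstructed triangle is inversely proportional to the oscillation frequency (w = K / f, with K a hypothetical display constant), and derives the matching frequency a subject would settle on for a given triangle height.

      ```python
      # Toy sketch of the SMTT base-width matching task. Assumption (not from
      # the original description): perceived base width w is inversely
      # proportional to oscillation frequency f, i.e. w = K / f.

      K = 8.0  # hypothetical display constant (deg * Hz)

      def perceived_base_width(freq_hz: float) -> float:
          """Perceived base width (deg) at a given oscillation frequency."""
          return K / freq_hz

      def matching_frequency(height_deg: float) -> float:
          """Frequency (Hz) at which perceived base width equals the height."""
          return K / height_deg

      # As the experimenter raises the hidden triangle's height, this toy
      # model predicts the subject settles on a lower oscillation rate,
      # and vice versa -- the inverse relation reported above.
      for h in (1.0, 2.0, 4.0):
          f = matching_frequency(h)
          print(f"height {h:.1f} deg -> matching rate {f:.1f} Hz "
                f"(perceived base {perceived_base_width(f):.1f} deg)")
      ```

      Under this assumption the match is exact by construction; the point is only to make the qualitative prediction (slower rate for taller triangles) concrete.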

      NONE OF THESE FEELINGS CORRESPOND TO THE ACTUAL RETINAL INPUT TO THE SUBJECT, YET ALL OF THESE FEELINGS ARE IN ACCORDANCE WITH WHAT THE RETINOID MODEL OF FEELING PREDICTS. It seems straightforward to me that this is a good example of how the retinoid model of consciousness provides a causal explanation of feeling.

      For a more detailed account and additional empirical support for the retinoid theory of feeling/consciousness see here:

      and here:


      Arnold, I don't doubt at all that your model explains some dynamic illusions, but you have to remember that illusions, just like "veridical" perceptions, consist of two things: doings and feelings.

      In the Müller-Lyer illusion, for example, one of two equal-length lines looks longer, depending on whether the arrows at the ends of the lines face in or out. Subjects can adjust the line lengths until they look equal, but then one of the lines will in reality be longer. A (perspectival!) model of psychophysical length recognition and judgment could, like your retinoid model, predict which combinations of inward- and outward-pointing arrows will produce which length judgments. That's important, and it is indeed a causal explanation, but it is a causal explanation of doings: recognition and length discrimination. It does not explain why any of it feels like anything.

      The feelings are tightly correlated with the doings (so tightly, that one would have to believe in voodoo to imagine that anything but the brain causes both) and hence the model can predict the correlated feelings too; but that does not explain either why or how the brain causes those correlated feelings: just why and how the brain causes the doings with which they happen to be correlated. How and why those doings are felt doings remains unexplained (and sounds causally superfluous).


    Stevan, the retinoid model of consciousness is not a psychophysical model. It is a detailed neuronal model of a system of brain mechanisms that regulate and CONSTITUTE feeling (conscious experience).

    In the SMTT experiment that I described above, the subjects are NOT recognizing or judging an external stimulus as in a standard psychophysical experiment; they are having a vivid feeling (conscious experience) of a triangle in motion when there IS NOTHING LIKE A TRIANGLE IN THEIR VISUAL FIELD. When they adjust the width of the felt triangle to match its varying height, there is NO external object on which to base their adjustment. It is entirely an INTERNALLY constructed feeling generated by the neuronal structure and dynamics of the retinoid system. If you were to look over the shoulder of the subject, you would have the same vivid feeling of a triangle in motion, but since your retinoid system is not a duplicate of the subject's, you might not agree with his adjustment for height-width equality. The bottom line is that this experiment provides very strong evidence that the pattern of autaptic-cell activity in retinoid space IS our feeling/conscious experience. No correlation involved.

  25. Stevan wants to give an efficacious role to feelings, and after all he spends so much time talking about them that it would seem his feelings do indeed have a voice.
    But he doesn't see how this can be unless feelings are a force of nature on a par with other forces, which he claims is impossible.
    Given what we know of quantum physics, I am not sure it is impossible that minds have a downward causation on the probability distributions of matter.