David Rosenthal: Does Consciousness Have Any Utility?
Abstract: It is plain that an individual's being conscious and an individual's being conscious of various things are both crucial for successful functioning. But it is far less clear how it might also be useful for a person's psychological states to occur consciously, as against those states occurring but without being conscious. I'll restrict attention here to cognitive and desiderative states, though similar considerations apply to perceiving, sensing, and feeling; like cognition and volition, all these states are useful; the question is whether any additional utility is conferred by any of these states' occurring consciously, and I'll offer reasons to think not. It has been held that cognitive and volitional states' being conscious enhances processes of rational thought and planning, intentional action, executive function, and the correction of complex reasoning. I examine these and related proposals in the light of empirical findings and theoretical considerations, and conclude that there is little reason to think that any additional utility results from these states' occurring consciously.
If so, we cannot rely on evolutionary adaptation to explain why such states so often occur consciously in humans and likely many other animals. Elsewhere (Consciousness and Mind, Clarendon, 2005) I have briefly sketched an alternative explanation, on which cognitive and desiderative states come to be conscious as a byproduct of other useful psychological developments, some involving language. But there is still no significant utility that these states' being conscious adds to the utility of those other developments.
Rosenthal, D. "Consciousness and Its Function." Neuropsychologia 46, no. 3 (2008): 829-840.
WHAT MAKES UNCONSCIOUS "THOUGHTS" "MENTAL"?
In reply to Emma's question about what makes unconscious processes "mental":
(1) David Rosenthal wants an *argument* for the possibility of unfelt feelings (unconscious red, loud, etc.). That's a call for an argument for a contradiction.
(2) David also says intentionality can be unconscious. (Intentionality means "aboutness".) So there can be the unconscious "thought" that "the cat is on the mat," because that unconscious thought is about the cat being on the mat.
But either "the cat is on the mat" is just an internal sentence (a string of meaningless symbols, as in a book or a computer, with the interpretation, hence the "aboutness" coming from the external reader or user), which is mere syntax, with no "aboutness," or it is David who must give an argument to explain how unfelt thoughts are about what they are about, or about anything at all, on their own, without the mediation of an external (thinking, feeling) interpreter.
(1) Stevan says I wanted an argument for the possibility of unfelt feelings. I don't recall that request. Instead, I asked for an argument that mental states--e.g., thoughts, expectations, desires, perceptions, sensations, emotions, etc.--cannot occur without being conscious states. My understanding is that Stevan uses the adjective 'felt' much as I use the term 'conscious'--when that term is applied to mental states--states of the sort listed above.
To reiterate what I said in discussion--and all too briefly in the talk: I regard a state as mental not by appeal to ostensive definition, such as the giving of a list like that above, but rather by appeal to the state's having one or another of two defining characteristics: intentional properties or qualitative properties.
Intentional properties are intentional content (that something is the case) together with what Russell called a propositional attitude (here I'll give examples: thinking assertorically, expecting, desiring, doubting, wondering, etc.). If a state has intentional properties so defined, it's a mental state. Qualitative properties are properties like the redness, painfulness, and so forth that occur in perceiving and sensing the external environment and in bodily sensations.
I hold that both types of property can and often do occur without being conscious--without an individual's being at all aware of their occurring, or at least unaware of their occurrence without appeal to conscious inference or observation. Intuitively, that amounts to one's being unaware of their occurring except by appeal to the third-person techniques that we use to tell when *other* people and nonhuman animals are in those states.
What I was asking for was an argument that this last paragraph is mistaken: Why can't mental states, so defined, occur without the individual's being at all aware of their occurrence--except by third-person means?
I think it's not a good idea to put things in terms of whether there can be unfelt feelings. If 'feeling' just refers to mental states and 'unfelt' simply refers to a state's not occurring consciously, both as defined above, there's no problem about the occurrence of unfelt feelings. But putting the issue in those terms creates an unnecessary and, I think, theoretically loaded impression of contradiction. So it creates an unnecessary and theoretically loaded sense that mental states cannot occur without being conscious. Putting the issue in more neutral terms therefore seems to me to be preferable.
THAT'S ABOUT STEVAN'S FIRST COMMENT; THE BLOG WON'T ACCEPT MORE. I'LL TRY ANOTHER WAY.
THIS IS THE REPLY TO STEVAN'S (2):
(2) Stevan is skipping the part about holding an attitude toward the intentional content; I don't regard the mere occurrence of intentional content as sufficient for a state's being mental. I regard it as sufficient only if the state exhibits intentional content *and* the individual exhibits evidence of an attitude toward that content.
There are many reasons to think that intentional content occurs without being conscious. All one needs is a theoretical reason (theoretical, because one cannot in these cases go on first-person access) to identify the occurrence of a state as exhibiting intentional content. I favor conceptual-role theories, which regard states as having intentional content if they have the kind of causal potential toward behavior, stimuli, and *most crucially* other inner states that we take *conscious* intentional states to exhibit. It's the causal potential, not one's being aware of being in the state, that's responsible for the state's having intentional content.
Stevan says that if such a state isn't conscious it is "just an internal sentence (a string of meaningless symbols, as in a book or a computer, with the interpretation, hence the "aboutness" coming from the external reader or user), which is mere syntax, with no "aboutness"." I myself see no good argument for that, being unconvinced by Searle's 1990 target article in Behavioral and Brain Sciences. I recommend in that connection my commentary on that target.
WHAT MAKES A MENTAL STATE MENTAL? (Reply to David Rosenthal's Reply 1)
David and I differ (profoundly!) on what makes a mental state mental -- hence on what we mean by "mental": David thinks it's a bunch of things. I think it means one and only one thing: An internal state is a mental state if and only if it is being felt (i.e., a felt state).
If an internal state is not being felt, it is not mental, even if it occurs in the brain of an organism that is capable of mental states, even if it occurs while (other) mental states are being felt, and even if it is part of the brain substrate of a state that could eventually become a felt state.
"Mental" (as applied to "state") is synonymous with "conscious," "aware," "experienced," "subjective," "qualitative" etc. etc. as applied to "state." And in order to avoid equivocation, obfuscation and question-begging, I strongly urge sticking to the word "felt" and avoiding all those other weasel-words.
An "intentional" state is only mental if it is felt. Otherwise it is merely a state that can be interpreted (by someone who has mental states) as being about something. In that sense, an unfelt "intentional" state is more like a sentence in a book or a computer than a thought that is being thought by a feeling thinker. (And of course I don't think the "mark of the mental" is intentionality ["aboutness"], but feeling, since intentionality can be either mental [felt] or non-mental [unfelt].)
(It should be clear that on this sense of "mental," an argument that there could be unconscious mental states would indeed be an argument that there can be unfelt feelings. If I am wrong to equate mental with felt, then what I have said is indeed theoretically loaded; but if I'm right, then it is theoretically lightened! )
FELT AND UNFELT "ATTITUDES" (Reply to David Rosenthal's Reply 2)
Attitudes, like intentionality and internal states in general, can be felt or unfelt. If unfelt, they are not mental. A sentence on a page has a lot of causal potential if taken up and acted upon by a feeling mind. But if taken up and acted upon by an unfeeling robot (or by a feeling mind, but without feeling it), the sentence is not mental.
Searle's 1990 argument in BBS is that an unconscious state is mental if it is potentially accessible to consciousness. By my lights, that means it's mental if it's potentially felt. I would rather say it's mental only if it's actually felt. (The pinch you gave me while I was ranting about the definition of consciousness may have been potentially felt, but I didn't feel it; so it isn't and wasn't mental, though perhaps with a few attentional switches swapped, it could have been. Ditto for the sentence on the page that my eyes wandered over whilst I was thinking of something else; or even the inchoate thought that I missed thinking because I was in the throes of a rant about the meaning of consciousness…)
AGAIN: BECAUSE OF LENGTH, I NEED TO BREAK MY POST INTO TWO PARTS.
Reply to Stevan--about what makes a state mental:
I don't think it's quite right to say I think that it's a bunch of things that make a state mental. I do think that there are two families of mental properties--intentional properties and qualitative properties. And a state's having properties from at least one of those two families suffices to make it mental.
One could raise a question about why a state's being mental, which one might think of as a unitary matter, should be exemplified by two distinct types of property. I don't think that that's such a problem. Both intentional and qualitative properties are representational. The intentional content of thoughts, desires, expectations, doubts, and the like, e.g., represents whatever that content is about. And the qualitative character of sensations of red and of a good wine represents the physical property of something's being red and the physical property of being a good wine, just as the qualitative character of pain represents disturbance or damage to a part of the body. Part of what unifies intentional and qualitative properties is their representational character.
Another part is that states that are mental in virtue of having properties belonging to one or both of those two types, though they are not always conscious--in Stevan's terms, not always *felt*--are all of a type of state that can be felt. That's a second characteristic that unifies a state's having one or the other of the two types of property. And these two unifying factors are enough to undergird our sense that a state's being mental is a unitary thing.
Stevan denies this, writing that "[i]f an internal state is not being felt, it is not mental." And he writes that ""Mental" (as applied to "state") is synonymous with "conscious," "aware," "experienced," "subjective," "qualitative" etc. etc. as applied to "state."" I am not sure what is *substantively* at issue here--as against a merely verbal issue. What is it that would be different about the world--as against Stevan's and my uses of words--that would make Stevan right and me wrong, or conversely? I don't see anything.
Stevan thinks I use weasel words in my account of things; I don't see anything at all unclear about my usage. But my usage has an advantage his lacks: it highlights the way states we are aware of ourselves as being in resemble, in very salient ways, states that we have third-person evidence we are often or sometimes in, though without first-person access to our being in them.
Stevan thinks that what I call an intentional state is, if one isn't aware of it in a first-person way, "merely a state that can be interpreted (by someone who has mental states) as being about something." I think we have substantive reasons for seeing such states that way. We shouldn't be misled by Stevan's disparaging use of 'interpret'; all that amounts to is having substantive reason to classify a state in a particular way--as having the relevant kind of representational properties--i.e., intentional content.
Stevan raises the specter of misapplying the notion of intentional content: we would have no reason to withhold it from computers if we apply it to states to which the being in question does not have first-person access. I think this is a manufactured worry. We know on holist grounds--how particular states interact with one another and with inputs and outputs--that the states computers are in don't (at least as current computers operate) interact in sufficiently rich ways to see those states as having intentionality. By contrast, there are plenty of nonconscious states people and other animals are in that do interact in sufficiently rich ways to warrant our regarding them as having intentional content.
What do I mean by holist interactions? I mean causal interactions, actual and potential, with a very great many other states and with many actual and possible inputs and outputs.
About felt and unfelt attitudes:
Here Stevan raises the worry that we may not be able to distinguish the intentionality of sentences written on a page from the intentionality I assign to states of which individuals aren't aware in a first-person way (states that aren't felt). Again, I think there is nothing serious to worry about. Sentences written on a page simply don't interact in the holist way with anything else, though they do, when read, have one-on-one causal ties with actual mental states in the reader's mind (and, when written, such ties with intentional states in the writer's mind).
RICH INTERACTION POTENTIAL
"Both intentional and qualitative properties are 'representational'":
"Representational" is alas another weasel-word! Felt or unfelt representation? If unfelt (a picture, a text, a computational or robotic or neural state) it is just a state. Nothing mental about it at all.
"Mental states are not always felt but a 'type' of state that can be felt":
Sounds like potentially mental states, rather than actually mental states. As for mental potential: who knows, this carbon atom might be a part of a potentially mental state, even if it's in a fossil, cadaver or oil-spill…
"What is substantively at issue -- as against merely verbal?"
An explanation of how and why we feel.
"States we feel resemble states we don't feel":
Resemble them how?
"Holist interactions are actual and potential causal interactions with many other states and many actual and possible inputs and outputs":
What (apart from the robotic Turing Test [T3]) is the test of whether states are sufficiently interactive. (And T3 is all about doing, not feeling.)
"We know (current) computers don't interact in sufficiently rich ways whereas plenty of non-conscious states in people and other animals do":
What (apart from the robotic Turing Test [T3]) is the test of whether states are sufficiently "rich"? (And T3 is all about doing, not feeling.)
"Sentences on a page don't interact in the holist way with anything else, though when read they do have one-on-one causal ties with actual mental states in the reader's mind":
When read into a feeling reader's head, sentences become part of mental states (like the carbon atom).
There may not be a one-liner about what it is for a state to be mental, and the quest for a one-liner may mislead.
I didn't say that anything representational is mental. I said that being representational is one (of two) aspects that intentional content and qualitative character have in common, construed as properties of states of people and other creatures (and almost certainly computers of the not-too-distant future). It was not part of a definition of mental on its own; I offered intentional content and qualitative character for that, and offered representational character as what those two kinds of property have in common. That they have it in common with other things isn't relevant.
Stevan says that my saying that mental states are not all conscious, but rather a type that can be conscious, "[s]ounds like potentially mental states, rather than actually mental states." What is there about the world, as against a propensity to use words in one way or another, that would settle that issue?
Stevan replies to this question by saying that what is substantively at issue, and not merely verbal, is "[a]n explanation of how and why we feel." Answer: States that we are in, which are not in themselves or always conscious states, sometimes are conscious. The explanation of how and why we *consciously* feel will consist in explaining how and why some of those states are conscious. Simply having as a desideratum "[a]n explanation of how and why we feel" does not decide between the view that the term 'mental' applies only to states that are conscious (felt, as Stevan puts it) and the view that it applies to states that can be conscious but often aren't.
Stevan asks how nonconscious mental states resemble conscious mental states: in their qualitative and intentional properties.
Maybe the robotic Turing test is a good way to tell whether the holist interactions a state has with other states and with inputs and outputs are rich enough to count as mental. I'm neutral about that. And, yes, of course it's not about only conscious mental states, but about mental states generally, both the conscious ones and the ones that aren't conscious ("felt").
Stevan writes, "When read into a feeling reader's head, sentences become part of mental states (like the carbon atom)." I'm not sure I understand. The sentence remains on the page; it isn't read "into" anybody's head, but read *by* somebody. Its semantic (and possibly other) properties are represented in the reader's head. The sentence has very few interactions with other states, and none to speak of with (other) inputs or directly with outputs.
THE MARK OF THE MENTAL (1 of 3) Reply to David Rosenthal (DR)
"Representational" has the same problems as "intentional": It comes in two flavours. Mental (felt) and not (unfelt).
The distinction is along the same lines as Searle's intrinsic intentionality vs. extrinsic or derived intentionality. A sentence or an image or a thought or a proposition is not "about" something unless a feeling entity is actually saying, seeing, thinking or meaning it. And it feels like something to be saying, seeing, thinking or meaning something.
Otherwise a sentence or image on a page or inside a computer or robot, or an unfelt internal state inside a feeling entity that is systematically interpretable as being a thought or proposition, but not actually being felt, is simply an internal state, as in a toaster or teapot: nothing mental about it at all.
DR: "What is there about the world, as against a propensity to use words in one way or another, that would settle that issue [of whether or not unfelt states are 'mental']?"
The only issue about which there is a fact of the matter is whether and when an entity has felt states. (The feeler knows for sure.) What we decide to call states other than felt states is, as David says, a matter of word-choice (except if we decide to call them "felt" in which case we can only call them felt if and when they are indeed felt!)
If there were only unfelt states, there would be no "hard" problem, just toasters, teapots, and Darwinian zombie-organisms, including talking ones.
DR: "States that we are in, which are not in themselves or always conscious states, sometimes are (conscious)."
Conscious is again a weasel word here. The above sentence would not even make sense if we unambiguously used "felt" instead of "conscious":
"States that we are in, which are not in themselves or always felt, sometimes are (felt)!"
The only states that are felt are the states that are felt. An unfelt state is unfelt. If it "resembles" a felt state (say, shares some of its neural substrate), that's interesting, but only because it focuses the mystery on why and how the neural difference between the unfelt state and the felt state makes the felt state felt!
DR: "The explanation of how and why we *consciously* feel will consist in explaining how and why some of those states are conscious."
Again, the weasel-word, creating what looks like alternatives out of synonyms:
The unambiguous way of putting it is "The explanation of how and why we feel will consist in explaining how and why some states are felt."
The only way to feel is consciously. There is no "unconscious feeling" [unfelt feeling] (though there can be unconscious states and processes, as well as unconscious responses and capacities, neural and behavioral).
DR: "an explanation of how and why we feel" does not decide [whether] 'mental' applies only to states that are conscious (felt, as Stevan puts it) [or] to states that can be conscious but often aren't."
"Mental" is yet another redundant weasel-word. David wants to use it for internal states that are somehow "potentially" felt, or potentially "part" of states that are felt.
We could by the same token say that they are only "potentially" mental, or potentially "part" of states that are mental.
Or we could just throw out the redundant weasel-word "mental" and say states are felt if they are felt, and unfelt if they are not: "potential" and "parts" have nothing to do with it.
[1 of 3, continued]
THE MARK OF THE MENTAL (2 of 3) Reply to David Rosenthal (DR)
DR: "Stevan asks how nonconscious mental states resemble conscious mental states; in their qualitative and intentional properties"
Unfelt states have no qualitative properties. And intentional properties are merely derivative if/when they are unfelt, the way the interpretability of a sentence in a book or a state in a computer or a VR simulation is parasitic on the (felt) state in the head of the reader or viewer.
DR: "Maybe the robotic Turing test is a good way to tell whether the holist interactions a state has with other states and with inputs and outputs are rich enough to count as mental. I'm neutral about that."
The only thing the robotic TT can do is show you what states and processes are sufficient to generate behavioral capacity (doing) indistinguishable from that of a feeling human being: interacting with the world, interacting with other human beings with words and other doings.
The actual nature and richness of the "holist interactions" of internal states with other internal states, and with inputs and outputs awaits the findings of future cognitive science (and progress on designing models that can pass the robotic TT).
It is not clear to me how something so vague, let alone some hypothetical continuum of "richness," can tell us what does and does not count as mental. Correlations there will certainly be, between our felt and unfelt states and our brain's internal states. There will also be such correlations with the TT robot, though we may be inclined to be a trifle less confident about whether it is indeed feeling when it says and behaves as if it is. Turing recommends giving it the benefit of the doubt, faute de mieux, and I incline the same way.
Perhaps at Turing scale we will have an idea of what the continuum of "richness" underlying "holistic interactions" actually consists in, functionally speaking, if there is indeed such a continuum. We may even find the cut-off point along that continuum where feeling actually kicks in, if we can take the robot's word for it (and we should). But that will be just the same as if we find the neural correlates of unfelt and felt states: Whether on a continuum with a threshold between felt and unfelt, or simply functionally different states sharing some features and components and not others, we still will not have addressed the hard problem of explaining how and why the felt ones are felt.
[2 of 3, continued]
THE MARK OF THE MENTAL (3 of 3) Reply to David Rosenthal (DR)
DR: "It's not about only conscious mental states, but about mental states generally, both the conscious ones and the ones that aren't conscious ('felt')."
This is unfortunately back to splitting synonyms: "It's not about only felt internal states, but about internal states generally, both the felt ones and the ones that aren't felt."
Well, yes, but I think we've already agreed that it's just a matter of word-choice if we decide to call the unfelt internal states of a feeler "mental" just because they're going on inside of a feeler rather than a teapot.
Not so for "felt," about which there really is a fact of the matter (but you have to be the feeler in order to feel it).
DR: "Stevan writes, 'When read into a feeling reader's head, sentences become part of mental states (like the carbon atom).' I'm not sure I understand. The sentence remains on the pages; it isn't read "into" anybody's head, but read *by* somebody. Its semantic (and possibly other) properties are represented in the reader's head. The sentence has very few interactions with other states, and none to speak of with (other) inputs or directly with outputs."
The sentence "The cat is on the mat," when it appears on this screen, is not part of a mental state. When you see and understand it, so that you are thinking "The cat is on the mat," then it is part of a mental state, because it feels like something to think "The cat is on the mat." (Searle's extrinsic or "as-if" intentionality vs. intrinsic or "original" intentionality again, though I don't really like that dichotomy: Unfelt vs. felt "meaning" is better.)
Bref, Brentano was mistaken. The mark of the mental is not intentionality but feeling. To have a mind is to feel. And an internal state -- even an internal state in an organism that is capable of feeling -- is only mental (felt) if and when it is being felt. The rest is just internal states and processes, as in a robot, or teapot.
I do insist that intentional and other representational states can occur consciously or not (felt or not, in Stevan's terminology). I don't accept that Stevan and Searle are right that such states aren't just as mental in nonconscious as in conscious form. There's no issue about what makes them mental when they are conscious; teapots are not in qualitative states, in that they aren't responsive to stimuli in ways that allow fine-grained discriminative responses (see my "How to Think about Mental Qualities," on http://tinyurl.com/drpubn). Similarly with intentional properties: teapots don't have mental attitudes toward a range of intentional contents. Nonconscious intentional properties are not Searle's derived intentionality; nonconscious intentional states have intentional content and mental attitude because of causal and dispositional ties with other states (perhaps none of them conscious) and with stimuli and inputs.
Stevan objects that this is not a strict line. But it's not obvious that 'mental'--as opposed to 'conscious'--applies in a strict, on-or-off way. There can be and are many borderline cases. But the existence of borderline cases doesn't mean that there aren't overwhelmingly many clear cases.
Stevan follows Searle in holding that nonconscious mental states are "internal states that are somehow 'potentially' felt, or potentially 'part' of states that are felt." That's simply not how I'm using the terms. Read the foregoing article and my "Intentionality," available on http://gc-cuny.academia.edu/DavidRosenthal/About.
Using consciousness as a necessary condition for states' being mental rules out, by unsubstantiated fiat, a very great deal from research projects--research into states that have everything in common with conscious mental states except for being conscious. And there's no need for such a shortcut, one-stop mark of the mental; it can be the more complex mark I have argued for. That's what I had in mind in describing it as a verbal matter.
Stevan's "translations" of 'being conscious' into 'being felt' aren't neutral; they create a sense of paradox where there is none. There's no paradox in a state's being mental but not conscious or not felt--i.e., one isn't aware of the state; there is a paradox in a state's not being conscious but being felt.
Stevan and I agree that written or spoken sentences are not parts or aspects of intentional states; he follows Searle in seeing such sentences as analogous to intentional states that aren't conscious; I deny that that's at all a useful analogy, relying for nonconscious intentional states instead on the factors I list above.
Nobody doubts that explaining intentional content and mental attitude, and explaining mental quality, is a serious explanatory challenge. I believe I have made good progress, as have a number of others in the field. Stevan's use of being felt to distinguish mental from nonmental substitutes an uninformative one-liner (being felt) for the hard work of explaining intentional content, mental attitude, and mental quality.
For the perplexed reader, here is what the disagreement between David and me hinges on:
Let us call unfelt internal states, occurring inside the brain of an organism that can feel, "zombie states."
The reason I call them zombie states is that I think we all agree that if an organism's brain had only such states, and could still do and say everything that a normal human being can do and say, then it would indeed be a zombie (i.e., a Turing-Test passer [T3] that was indistinguishable from us, yet did not feel).
Now let us remember that the "hard" problem is explaining how and why feeling organisms feel. So for an organism that had only zombie states, there would be no such problem. All that would need to be explained was how it could do and say everything it could do and say ("easy" problems!).
Now back to reality: The human brain can indeed feel. And it has both zombie states and non-zombie states.
So the disagreement with David is over what to call some of the brain's zombie states. I think David wants to call those of them that share some of the properties or parts of felt states "mental states", whereas I'd rather reserve "mental" only for the felt ones. (The hard problem is, after all, traditionally called the mind/body or mental/physical problem!) David replies no, that's simply not how he is using these terms.
But I do agree completely with David that this is purely a verbal matter.
[By the way, none of what I've said implies that I believe that a zombie could pass the Turing Test.]
I don't think I quite agree with Stevan about where he and I differ. I don't myself think it's reasonable to stigmatize nonconscious mental states as zombie states; they're mental, according to me, in every way that conscious ("felt") mental states are: They have intentional content and mental attitude, or they have qualitative character, often both.
If I'm concerned about whether somebody else is in one or another, or indeed in any, mental state, it doesn't matter to my determining that whether those states are conscious ("felt"); I have nothing to go on but that individual's behavior. That includes, of course, verbal behavior--possibly apparent testimony to the individual's being in one or another conscious state; but robots and so forth can engage in such verbal behavior.
I think Stevan and I are agreed that whether to call nonconscious states with intentional or qualitative properties mental is a verbal matter. Given that, I don't understand his insistence on the more restrictive usage. My more inclusive usage has the advantage of connecting more phenomena together.
Unconscious qualitative character? Who's enjoying the quality? And in what does the quality consist, if it is not a feeling?
On his way to the conclusion that conscious states do not have utility in virtue of being conscious, Rosenthal claimed that there are many conscious processes that are less efficient than similar unconscious processes. (If I remember correctly, he cited the work of Dijksterhuis showing that consumers make better choices when their volition is the result of non-conscious reasoning.) But I feel that many of the findings from the heuristics-and-biases research program of Tversky and Kahneman show that, at least in specific circumstances, we can make much better decisions by relying on conscious processes. We have many non-conscious heuristics (producing things like anchoring effects) that can mislead us in very dramatic ways when we make decisions. This is especially clear when it comes to logical reasoning (the Wason selection task and so on).... However, even if I am right, Rosenthal might still be right that it is not in virtue of being conscious that conscious processes can be more efficient in some cases.
Several things: (1) My understanding is that the Wason logical experiments and similar work apply just as well to conscious cases of reasoning. (2) Just because efficacious reasoning sometimes occurs consciously is not sufficient to show that its being conscious is implicated in the degree of efficiency. It might be that demands on reasoning produce *both* awareness of the reasoning *and* efficiency of reasoning, but that those are independent effects.
Bargh, Usher, and others have findings that support Dijksterhuis, whose methodology has been open to challenge; the others use methodology that is not questionable in those ways.
Rosenthal spoke about learning to play tennis.
I have great difficulty imagining a moment at which this kind of sport could become possible without the contribution of consciousness. I understand the point that consciousness is not always useful, but illustrating that with the example of tennis...
Tennis is not just a matter of trying to hit a ball with a racquet. So I can't really see how a behavior (hitting the ball so as to outplay the opponent) can be independent of the "readiness potential".
I'm not certain I understand; my French is very poor. My apologies for that.
The tennis case: Mastering a new skill and set of motions certainly requires close attention and control of behavior. What's needed is a reason to think that attention and control cannot occur without conscious mental states. I know of none. There is now a huge literature demonstrating that attention of all sorts occurs without the attentive mental states' being conscious (see work by Koch et al., Tsuchiya et al., Kentridge et al.). Similarly for control.
It may seem that these things occur only consciously; but that subjective impression must not be relied upon, since our subjective impressions only extend as far as those mental states that *are* conscious, and do not at all apply to mental states that are not. So introspective subjectivity cannot be any guide at all for these issues--about whether mental states' being conscious is needed for one or another task.
My apologies, but I still have the feeling that, in the absence of some explanation of what the causal influence of conscious mental states could be, Doctor Rosenthal is committing himself to a form of epiphenomenalism. If what makes a mental state conscious is a higher-order thought, M, about another mental state, N, one would suppose that M would have to occur after N. For M to be about N, N must already be instantiated in some way. The higher-order state always occurs after the fact, and hence M cannot have any causal role. He argues that such states could have some causal influence despite having no determinable functional role, yet I do not see where, how, or when such an influence could exert itself.
Maybe I’m missing something.
There are very many things that have causal efficacy but no utility. Most occurrences are like that. If having no utility is what you mean by epiphenomenalism, then yes, I'm arguing for that. But I don't see any reason to balk merely because the word is applied.
On the other hand, 'epiphenomenalism' in philosophy means having no causal efficacy; that is, I agree, a silly view. But I am in no way committed to it. Why would having no utility mean having no causal efficacy? Not everything that happens--even in the mind--is useful.
Please note what I stressed in my talk: My arguments against utility do *not* in any way depend on first adopting the higher-order-thought theory. My arguments were all independent of that.
But the higher-order thought (if my theory of consciousness is correct, and there are higher-order thoughts) can have a causal role--what its causal role is needs to be investigated.
How could a state's being conscious have causal influence but no utility? Its being conscious could be responsible for causing other mental occurrences--but ones that are not especially beneficial to the organism.
It's too bad Dr. Rosenthal did not have time to talk about hypnosis. That would have been interesting.
Here's what I could have said if I had not skipped my slide on hypnosis:
An example of executive function that is not conscious may well occur in hypnosis.
Actions performed under post-hypnotic suggestion involve no awareness of an intention to perform them, and no conscious sense of their being voluntary (Hilgard 1977; Spanos 1986; Oakley 1999).
Subjects are unaware of the planning these actions often require (Hilgard 1977; Sheehan and McConkey 1982; Spanos 1986; Oakley 1999).
Zoltán Dienes and Josef Perner (2007) explain all this as executive function that occurs without suitable HOTs: Hypnosis results in nonconscious executive function.
Do have a look at the Dienes-Perner article:
Dienes, Zoltán, and Josef Perner (2007). Executive control without conscious awareness: the cold control theory of hypnosis. In Graham Jamieson (Ed.), Hypnosis and conscious states: the cognitive neuroscience perspective (pp. 293–314). Oxford: Oxford University Press.
Thank you Dr. Rosenthal.
It is interesting, though, that Dr. Raz's opinion on hypnosis at his talk today was clearly that hypnosis is a hyper-attentive state that is fully conscious. Subjects may be unaware of some sensory stimuli that are physically present when they are told that those stimuli don't exist, but they are completely conscious of what they are doing and feeling/seeing.
I'm sorry I missed that talk. But I think the issue about hypnosis is not whether subjects are consciously aware of what they're doing, feeling, and seeing, but whether they're consciously aware of the hypnotically induced volitions as a result of which they're doing those things. In typical action based on post-hypnotic suggestions, subjects are unaware of volitions that have been formed from instructions given under hypnosis if under hypnosis they've been given the instruction not to recall that they were given the volition-forming instructions.
Isn't the proper answer to your question an enactive one, such as, "Huh?"
That said, if the question is asked using contrastive analysis --- meaning essentially the experimental method applied to consciousness as a variable --- the answers simply leap out. That is also true for any other biological adaptation --- does walking have any utility? When compared to the various kinds of paralysis of the legs, the answers fairly leap out, within a Darwinian framework.
This kind of analysis becomes non-trivial whenever there is a genuine empirical mystery --- what is the Darwinian function of serotonin? etc. What is the function of spontaneous empathy in infant-mother bonding? And so on.
You will argue no doubt that subjectivity is different from other neurobiological functions. It is in the sense I argued with Stevan who posed the Qualia Quiddity early on. Here's the answer I think will work, though it needs a little more work (the specifics for some consciously perceived dimension of variation).
I think the key to Qualia actually lies in the interaction of the observing executive ego of the prefrontal cortex (with links to parietal egocentric maps, for example) with allocentric (other-attributed) sensory input. Conscious objects of experience, like coffee cups in peripersonal space, require the interaction of those two systems: the ego/context hierarchy and input that requires some adaptive processing from that system.
In my 1988 book (which everyone should have memorized by now) I argue that observing ego functions are coextensive with contextual frames for qualitative experiences, and that the actual conscious experience of red objects involves reduction of degrees of freedom within the contextual color system as well as the egocentric/allocentric spatial maps.
What makes subjective qualia different from mere conjunctions of features is the interaction of the extended ego-frame system (the context hierarchy of my 1988 book) with unpredictable input. That's why you need the information reduction, a kind of Piagetian accommodation along the dimensions of subjective experience (e.g., psychophysical dimensions). In a sense all input that is experienced as subjective shakes up the entire dominant context hierarchy. In current neuroscience jargon, it requires "updating."
I have not developed this idea beyond the 1988 chapters on context, but it should be fairly straightforward to do so.
Maybe the time is ripe to do that now.
Not sure I get your enactive joke. I think there's a real question here, and people do take different sides on the question, even if my side is in a distinct minority.
But two comments about that. Most researchers who *say* they're studying consciousness are not studying anything that has to do with whether or why mental states sometimes do occur consciously. They're studying only that aspect of mental functioning that consists in mental states resulting in our being conscious of things.
One might hold--as Stevan and Nagel and Searle and others do--that one won't be conscious or aware of things except by virtue of being in mental states that are conscious states. But that's a substantive thesis; it doesn't simply, as you seem to suggest, go "without saying." And I am arguing that this substantive thesis is not true.
Onto other points.
If by contrastive analysis you mean comparing cases in which mental functioning is conscious with those in which it isn't, that's simply not good science--at least if we want to understand what the *role* of the states' being conscious is. It might be, as I argued explicitly in my talk and in my Neuropsychologia article, that in the cases in which mental functioning does occur consciously, its being conscious is not itself playing any particularly useful role. It might be that the utility and the states' being conscious are *independent*, *joint* effects of a single process. That possibility must be ruled out.
In addition, it increasingly emerges from a very great deal of experimental work--much cited by Hakwan in his talk Saturday morning--that mental functioning that typically occurs consciously (or in any case subjectively seems to--which is no evidence at all), sometimes occurs without being conscious. So things are a very great deal more complicated than your appeal to contrastive analysis suggests.
And, no, I don't think the case of conscious qualitative mentality is different from other cases.
Finally, about the executive ego, there is ample evidence that it also functions without any relevant *conscious* mental states.
On the question of unconscious knowledge in all its varieties, an empirically adequate answer was published in my 1988 book, A Cognitive Theory of Consciousness, based on extremely well-established facts of selective habituation of conscious contents. The most famous (in physiology) argument was first advanced by the Russian physiologist Yevgeny Sokolov, and it applies by generalization to major empirical phenomena like stopped retinal images, the Ganzfeld, semantic satiation, and the unconsciousness of presuppositional knowledge in general. These are extremely general and fundamental empirical phenomena. I believe the Gestaltists were aware of them, and they led to Adaptation Level theory, another empirical theory of great generality. Currently it appears in the work on vision by Susana Martinez-Conde and Steve Macknik.
My 1988 book has now been re-published on Kindle Books for about 10 dollars. (Amazon wants me to prove I'm the author, which is another annoying glitch.)
If anyone wants the pdf version, let me know at BAARS AT NSI DOT EDU.
These arguments and the evidence for them are really quite well-established, if one is an inductivist. In my view, problems like vitalism and mind-body debates are only ever solved inductively. You don't know the answer until you find a rigorous empirical approach, and then you work it out.
I'm not sure how to reply to this, Bernie. You refer to things without giving me enough to address in a specific way. Perhaps continue here or elsewhere?
David Rosenthal said that the occurrence of consciousness did not rely on its beneficial properties but rather on something else, am I correct? But can we really say so? If a trait keeps occurring through evolution, isn't it because it has, at some point, some beneficial properties?
I agree with that, Pauline! If we didn't need consciousness to attend to our mental states to survive/to be more adapted to our environment, it would surely have disappeared throughout the years. But I'm still puzzled by the study where they showed that we make better consumer decisions without consciousness and that consciousness can have a deleterious effect on the decision. How can that be? I know we do solve some problems while we are not attending to them (like when you search for a name and stop thinking about it and then it pops back), but making a consumer choice? Dr. Rosenthal, could you give us the reference to this article?
Several things. (1) The fact that mental states are conscious does not by itself show that their being conscious has anything at all to do with evolution. It could be that all that evolved was the capacity for mental states to be conscious, and developmental factors lead in each individual to their being conscious. One can't simply assume that mental states' being conscious is like having two arms and two legs--simply part of the genetic endowment. It could be like walking: Our genetic endowment is to have the capacity, and we come to be able to exercise that capacity.
(2) Evolution conserves; things rarely get dropped. So if something evolved by accident--i.e., without any help from selection pressures--it's very unlikely to disappear.
(3) We don't yet know how much of the evolutionary process is due to adaptive advantage and how much to how the DNA tends on its own to change, independent of its phenotypic effects. There's strong evidence that some recombinations of DNA strands are preferred over others; we don't know--but simply *assume*--that mutation is really random; it may well not be. Assuming that evolution proceeds by natural selection is simply black-box reasoning in the absence of any knowledge of internal mechanisms.
About consumer choices:
There's lots of work on this.
Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision making. Journal of Personality & Social Psychology, 87(5), 586–598.
Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311(5763), 1005–1007.
This was criticized for methodological reasons in
Waroquier, L., Marchiori, D., Klein, O., & Cleeremans, A. (2010). Is it better to think unconsciously or to trust your first impression? A reassessment of unconscious thought theory. Social Psychological and Personality Science, 1, 111–118.
but replicated without those methodological difficulties in
Usher, M., Russo, Z., Weyers, M., Brauner, R., & Zakay, D. (2011). The impact of the mode of thought in complex decisions: Intuitive decisions are better. Frontiers in Psychology, 2:37. doi: 10.3389/fpsyg.2011.00037
See also lots of work by John Bargh and colleagues.
> See also lots of work by John Bargh and colleagues.
Which we also failed to replicate in Doyen S, Klein O, Pichon C-L, Cleeremans A (2012) Behavioral Priming: It's All in the Mind, but Whose Mind? PLoS ONE 7(1)
I for one, even though I am convinced there are indeed lots of unconscious influences on behavior, have become convinced that most of this evidence is very problematic. Claims by Hakwan, for instance, that one can do mental arithmetic without consciousness are very much overblown in my view. What happens is that (perhaps) you get very small priming effects (a few ms) that last for a few hundred ms and only for certain overlearned arithmetic facts such as 7x6 or 3x2. I'd be convinced if one could compute 63x78 without awareness... On the other hand, it can clearly be done, as any pocket calculator readily demonstrates. But this is no proof that in humans it can be done without awareness.
There are several issues here.
(1) I might agree that the methodology in Dijksterhuis et al. and Bargh et al. is not the best; but failing to replicate just raises the question of whether one can replicate some other way.
Waroquier, L., Marchiori, D., Klein, O., & Cleeremans, A. (2010). Is it better to think unconsciously or to trust your first impression? A reassessment of unconscious thought theory. Social Psychological and Personality Science, 1, 111–118,
failed to replicate
Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311, 1005–1007
Usher, M., Russo, Z., Weyers, M., Brauner, R., & Zakay, D. (2011). The impact of the mode of thought in complex decisions: Intuitive decisions are better. Frontiers in Psychology, 2:37
arguably more or less succeeded. We may succeed yet.
(2) I don't, by the way, find myself able to do that computation, in my head; I need paper and pencil. Are you suggesting that most people can do it in their head consciously but not without conscious awareness? That seems implausible.
(3) Even if multiplying 63 x 78 never occurs in humans without awareness, it need not be that the relevant states' being conscious makes any contribution to the computational process that yields 4914 (I used a calculator) as the product. It could simply be that the greater computational demand such a calculation places on the mental processes results in greater neural signal strength for the computational processes, and that greater signal strength results in the states' being conscious. It need not be that the states' being conscious makes any contribution to the process that delivers the product. Co-occurrence is by itself no evidence whatsoever of causal role.
(4) You might reply, Axel, that it's the best evidence we have, at least right now. I'm arguing it's no evidence whatsoever without something additional that points to a causal role that the states' being conscious has in performing the calculational process.
(5) And there are theoretical reasons to think we won't find that causal role: The calculation process rests on the intentional content of the intentional states that figure in the calculation. We have experimental evidence from other cases that intentional content can occur without the relevant states' being conscious. Since intentional content occurs without consciousness, we have reason to suspect that consciousness plays no role in mental processes that rely on intentional content.
(6) Vis-à-vis Hakwan's claims for nonconscious mental arithmetic: As Hakwan himself points out, it's hard to get robust cases of any of these nonconscious processes, because robustness in these cases yields, as I noted in (3), greater neural signal strength, which in turn tends to cause the states to be conscious.
(7) We can't simply dismiss this experimental and theoretical concern. If all that's happening in the conscious cases is that consciousness is coming along for the ride because of greater neural signal strength, we have no reason to conclude that consciousness plays any role whatsoever in the process. Failure to replicate doesn't address this; we need experimental findings designed to distinguish the possibility that consciousness does play some relevant causal role in the relevant mental process from the possibility that it doesn't.
Hypnosis provides a valuable way of disentangling some of the confounds one often gets in consciousness research. In this case, consider Stevenson (1976) J of Abnormal Psychology, who had people add 7 repeatedly to an initial two digit number, apparently unconsciously (i.e. the task was performed with automatic writing). People got 26 correct answers in a minute under these conditions (admittedly worse than the 31 correct performed consciously).
You might ask: why should we believe subjects performed this task unconsciously? The general answer is that in lab conditions highly hypnotisable subjects pass a number of honesty checks (GSR lie detection methods, continued performance while they believe they are alone, relevant brain regions lighting up corresponding to their reports). These honesty-checking methods have not been applied to this particular case, so the matter is not closed. Nonetheless highly hypnotisable subjects (in lab - not stage - conditions) generally tend to prefer to openly fail tasks rather than mislead about their experiences.
My very many thanks to Zoltán Dienes (a.k.a. "unknown") for this post. I think it's very illuminating. Hypnosis is an unusual condition, but it strongly and usefully supplements the strong evidence we have from other quarters, experimental, theoretical, and folk observational, for the occurrence of very rich, robust mental processing that isn't conscious.
COMMENTS COPIED AND PASTED FROM THE CORRESPONDING POST ON FACEBOOK:
ANDY NDK :
"I would also advocate some benefit with consciousness; however, at least it has been proven not to be disadvantageous.. ;)"
PIERRE VADNAIS :
"If consciousness is necessary for, and co-evolved with, language, let's say it gets a free ride on language benefits."
That's not really so; there is evidence, which I cited, that when mental states in reasoning are conscious, that interferes with reasoning success. E.g., Dijksterhuis and Usher. Similarly for work on grammar learning--indeed, for so-called statistical learning altogether.
As for coevolving with language, I know of no evidence, and there's reason to think that that connection isn't reliable--if you're talking about a connection between language and the mental states that speech acts express being conscious states.
David also mentioned that we don't need to be conscious of a stimulus to be aware of that stimulus. But doesn't that contradict Stevan Harnad's view, which defines consciousness as a synonym of awareness?
Maybe I'm wrong, but I don't think that Prof. Harnad believes consciousness = awareness, necessarily. "Consciousness=feeling" and feeling is different from awareness (although they are often tied together tightly). You can be aware of the person sitting next to you, but you don't have to feel anything towards that person or feel anything as a result of them sitting next to you. My understanding of feeling is more congruent with emotion. Awareness could lead to feeling or come together with feeling, but they are not necessarily synonymous.
Being conscious of (or that) X = being aware of (or that) X = feeling (or feeling-that) X. Those who try to give "awareness" a distinct meaning from "consciousness" (in such notions as unconscious awareness, unconscious knowledge, unconscious perception) are merely talking about the unconscious (unfelt) detection or possession or processing of information (data). But although my brain may detect, possess or process data without my being conscious of it, it is equally true that I am not aware of it, and I do not feel it.
Best to stop trying to pry apart synonyms that are in any case weasel-words. Calling feeling "feeling" will never betray you, nor lead you into question-begging, irrelevance, empty semiology or absurdity.
So, you agree that there is something to be distinguished: unconscious detection (as in subliminal cues) is different from feeling. Both can lead to action.
On the other hand, it is not easy to reconcile your "there's not even any way to make sure anyone but oneself feels" and your blind faith assertion "Because feeling matters (and it's the only thing that matters.) And animals feel." You can't eat your cake and have it too.
Actions can be produced consciously or unconsciously and we cannot make the difference. Why bet that animals' actions are conscious and robots' actions are not? Because animals are biological like us? Then your criterion is not consciousness, it is biology... and it should also apply to plants.
Well, I do think my views and Stevan's are not compatible! But that's not by itself a reason to reject my views! (Nor, of course, by itself reason to reject his views.)
I distinguished three uses of the term 'conscious'. People and other creatures are sometimes conscious; their thoughts, sensations, perceptions, and so forth, are sometimes conscious; and people and other creatures are sometimes conscious *of* things.
You can tell that these are three distinct uses because 'not conscious' in the three cases clearly applies to very different things: in the first case, people or creatures' being asleep or comatose or anaesthetized, and so forth; in the second, 'not conscious' applies to thoughts, desires, perceptions, and so forth of which the creature is wholly unaware (except possibly in a third-person way); and in the third case a creature's being unaware of the thing in question.
In the third case, 'conscious' and 'aware' are the same.
But not in the second. In subliminal perception, e.g., we are aware of the stimulus; otherwise it couldn't affect downstream psychological processing. But we aren't *consciously* aware of it. Those two are distinct phenomena.
I myself think 'feel' and 'felt' and 'feeling' betray us very often. They apply equivocally to mental states of particular sorts--e.g., bodily sensations and emotions--and to mental states' being conscious. One could have a theory--as I take it you do, Stevan--that being mental never comes apart from being conscious. But one could have a theory on which they do come apart, as I do. Being mental is having intentional or qualitative properties, and being conscious is being aware of a mental state in a way that is subjectively independent of inference or observation, i.e., subjectively unmediated.
COMMENT COPIED AND PASTED FROM THE CORRESPONDING POST ON FACEBOOK:
PIERRE VADNAIS :
"Good to hear that this is what he said, I thought I had heard "possible to be conscious without being aware"... Patrick Haggard's subliminal cue puts things in perspective: the signal is sensed but not identified. Sensing is sufficient for awareness, but not for consciousness. In that context, awareness is limited to receiving a signal without identifying it; i.e. getting the signal but not the semantics. If I remember well, Shimon Edelman presented the difference between raw vision and conscious vision. The first was strictly visual signals, the second was reduced to identified (named) objects. Obviously, only humans can "name" objects. Signals are sufficient for sensorimotor reactions.
For animals, it is all subliminal cues. Once we have consciousness, it is very difficult to imagine how that can be. Refer to my poster: "Consciousness is knowing that perception is a representation of reality".
I guess that means there is no "What" branch in non-human perception... Or that "What" branch is strictly limited to categorization without identification."
STEVAN HARNAD :
"All David can mean is that we need not be conscious of stimulus (we need not feel it) for our brains to be able to detect it, and even respond to it."
PIERRE VADNAIS :
"This means that animals do not have to be conscious of stimuli (need not feel them) to be able to respond... never. Only verbal reporting gives evidence that the stimulus has not only been sensed (ok detected) but consciously felt. And "Ouch!" by a worm is not verbal reporting, only your interpretation of whatever signal you were conscious of..."
STEVAN HARNAD :
"EPICYCLING AROUND THE PROBLEM: People need not feel stimuli, ever; and I can get a toy robot to do verbal reporting. All evidence of feeling is inferential and correlative mind-reading, whether verbal or nonverbal. The hard problem is to explain how and why we feel despite the fact that it looks as if there's no need for it, and there's not even any way to make sure anyone but oneself feels."
FREDERIC SIMARD :
"I wrote a post on the specific subject of consciousness vs. awareness a few days ago; maybe you can look for it, but globally, from what I understand (and Pierre Vadnais is close to it), subliminal priming belongs to awareness, while subjects' reports of observation belong to consciousness... And I think consciousness should be seen as a subset of awareness (we are aware before we are conscious).
The terminology leads to a lot of confusion and impairs argumentation. Relating together the different terms used throughout the conference will definitely be part of my paper. (A striking example is the wiki definition of awareness: (...) to be conscious (...))"
STEVAN HARNAD :
"Communication will not advance unless we drop synonyms (and strained efforts to distinguish them). Consciousness is identical to awareness. Only Humpty Dumpty can pry them apart."
FREDERIC SIMARD :
Lol! I'll take that into account when writing my paper. Although we have to recognize that several people who presented during this conference are using consciousness and awareness in distinct ways, and we need to think more in terms of concepts than absolute word meanings with regard to these words in the context of the present conference...
I agree that being conscious of something is the same as being aware of it. But when Stevan says that "[a]ll David can mean is that we need not be conscious of stimulus (we need not feel it) for our brains to be able to detect it, and even respond to it," I take issue. I think we are aware of subliminal stimuli. I've given reasons above for thinking that this is a good way to describe things.
I did not, by the way, distinguish being conscious of things from being aware of them; I distinguished being consciously aware of things from being aware of them but not consciously.
My own experience leads me to agree with Dr. Rosenthal. After a little while of meditation practice I now am able to follow my thoughts most of the time while awake, and I am in fact a little shocked to see that I don't seem to have any control over them. Even what I previously thought was my rational thinking now appears to me as sentences popping out of nowhere (a little scary even, sometimes, since I guess that goes deep against the cultural perception of consciousness I have been brought up with). I can't help thinking that consciousness is therefore perhaps more about what's available in your working memory than about anything else, perhaps what Dr. Baars in his comment describes as "the interaction of the observing executive ego of the prefrontal cortex with allocentric sensory input". I look forward to reading Dr. Rosenthal's paper on his explanation.
Thanks for that.
Another presentation that might interest people:
and the presentation it was derived from,
"Higher-Order Awareness, Misrepresentation, and Function", Philosophical Transactions of the Royal Society B: Biological Sciences, special issue on Metacognition, 367, 1594 (May 19, 2012): 1424-1438, available at
Thanks for all these comments.
I'll be very happy to reply to these and any others when I'm back in NY--give me a day.
During your talk, you briefly mentioned and then set aside the possibility that language/report accounts for the utility of consciousness. Could you elaborate on this view and on the broader role of social interaction/communication? It seems strange to think of communication occurring as we know it without conscious awareness, thought, and interaction. Similarly, what is your stance on empathy as related to consciousness, and could you point me in the direction of any empirical evidence for or against either of these ideas?
Two things. (1) There is relatively little communicative utility to saying that one thinks something over and above simply expressing that thought. Thus there is relatively little communicative utility to saying that one thinks that it's raining (reporting one's thought) over simply *expressing* that thought--by simply saying that it's raining. Social communication is rarely if ever better served by saying that one thinks that such-and-such as against simply saying that such-and-such.
One might think that saying that one thinks that such-and-such involves a measure of hesitation or guardedness not present in simply saying that such-and-such. But hesitation can be conveyed by saying 'maybe' and the like.
It's not often noted that reporting a thought, by saying explicitly that one has that thought, is distinct from simply expressing that thought verbally, by saying something with the same content. But they obviously are distinct types of speech act, since they have distinct truth conditions. They seem alike because they have the same *use* conditions--i.e., the same conditions of appropriate utterance.
The foregoing considerations lead to an interesting consequence, which is (2). Saying that it's raining and saying that one thinks it's raining not only have the same conditions of appropriate utterance; it's second nature to us that they do. So anytime one says that it's raining, one might as readily have said that one thinks it's raining. Since we are habituated to say either one largely interchangeably, we're habituated, whenever we say that it's raining, to think that we think it's raining. And *that's* what makes one aware of one's thought that it's raining *whenever* one says that it's raining.
So for creatures like us, who are thus habituated, whenever we say anything, the thought we thereby express is a conscious thought. So language and consciousness seem to go together.
But that's not magic; the foregoing considerations explain why they do go together--and it's the special case of creatures who not only can talk and think about their own thoughts, but also are habituated to treat saying something and saying that one thinks that thing as interchangeable speech acts.
And though that interchangeability has some utility, the awareness of one's verbally expressed thoughts that flows from that interchangeability does not.
See chapter 10 of my _Consciousness and Mind_, OUP 2005, for the full development of these considerations.
Sorry; I skipped the question about empathy. But I guess I'd reply with a question to you: Are you simply assuming that empathy is invariably conscious? Empathy obviously has considerable utility, as was vividly explained in Baron-Cohen's presentation. What is it about empathy that requires that it occur consciously?
And even if it does for some reason occur consciously, why is its being conscious useful? Why wouldn't it serve the same useful purposes without being conscious--simply by registering in psychologically efficacious ways what mental states others are in, and by leading to appropriate psychological and behavioral reactions to that on one's part?
And finally, even if it does always occur consciously, why should one assume that its occurring consciously is in any way related to its being useful? It could be that something about empathetic registration of others' mental states and a resulting tendency to respond psychologically and behaviorally *causes* the empathetic realization to become conscious--without its becoming conscious being at all useful.
One can't trust one's subjective impressions about these things, since subjective impressions don't access mental occurrences that aren't conscious, and so can't compare them. And though subjectivity may suggest that one's mental states--of various sorts--are useful, it can't tell us that their being conscious is what's useful about them. It could be other aspects, such as their representational content and their propensity to cause other useful mental states and useful behavior.
Rosenthal says that during subliminal perception, we are "unconsciously aware" of the stimulus. I think that's a very confusing way to describe what is happening. We have to be more specific than that, for example by saying that we are aware of the presence of a stimulus but unaware of the identity of the stimulus. We can be aware of a stimulus to different levels of detail: mere detection, or furthermore identification, etc. It makes no sense to say that we are "unconsciously aware" of something.
I don't recall using the phrase 'unconsciously aware', and it's in any case nowhere on this blog, as a quick search reveals, except in Diego's post (and now mine).
What I said was that we could distinguish between conscious perceiving, in which we're consciously aware of perceived things, and nonconscious or subliminal perceiving, in which we're aware of things but not consciously aware of them.
That we're aware of things subliminally is evident from the effect that the subliminal input has on our distinctively psychological processing.
The distinction between being consciously aware of something and being aware of it but not consciously is simply a useful way, employed frequently in the popular press and in scientific journals, to capture the difference between conscious and subliminal perceiving.
I agree of course that we perceive in very many degrees of detail; but that's true both of conscious perceiving and of subliminal perceiving. The two issues cut across one another.
ON BEING UNAWARELY AWARE
David Rosenthal: "I don't recall using the phrase 'unconsciously aware'"
David Rosenthal: "[in] nonconscious or subliminal perceiving… we're aware of things but not consciously aware of them."
Sounds like "unconsciously aware" to me (or is this about the semiotics of the difference between "un-" and "non-"?)
David Rosenthal: "The distinction between being consciously aware of something and being aware of it but not consciously is simply a useful way… to capture the difference between conscious and subliminal perceiving."
May I suggest a simpler and more useful way? Subliminal "perceiving" is neither awareness nor perceiving: it's detection, processing, and responding (which can also be performed by a robot). Perceiving is done awarely (felt); subliminal detection, processing, and responding are not.
I think the terminology isn't so important here; what matters is whether, when stimuli are presented subliminally and they affect an individual psychologically, the psychological effect is similar in relevant ways to the psychological effect when the perceiving is conscious. I've argued--and there is overwhelming evidence for this--that the answer is yes. Given that, we have overwhelming reason to regard the subliminal case as perceiving. And then there is no issue about terminology except how to describe that result.
Gluing 'felt' to perceiving is an optional terminological decision that distorts things, by discounting all the psychological similarities that obtain between the conscious and nonconscious cases. We do greater justice to the situation by describing ourselves as being aware of the stimulus in the subliminal case--though not consciously aware--and as being consciously aware of the stimulus when the perceiving itself is conscious.
In any case, we can't settle substantive questions about whether the subliminal case is relevantly similar to the conscious case by terminological fiat; we must ask how similar the two cases are psychologically.
The weasel-word here is "psychological":
(Today's) robots and teapots (and Zombies), I assume we agree, do not have psychological states. They just have states.
The next question is: do all cerebral states of entities that are able to feel -- whether they are felt states or not -- count as "psychological" (or "mental") states?
Perhaps that's just a terminological issue. Ditto for cerebral states that affect other cerebral states. Whether we call the cause-state or the effect-state "psychological" is up to us. A fortiori, if a cerebral state is unfelt, but it resembles a cerebral state that is felt, and is even a precursor cause-state or an influence on a later cerebral state that is felt, then we are free to call it "psychological" (or "mental") if so inclined.
But calling the precursor state "aware" when it's not felt, or not yet felt, is another story -- and it goes beyond the question of the arbitrariness of how we choose to use the word "mental" or "psychological" and approaches something closer to either an equivocation or a self-contradiction.
That's why I urge doing away with all the ambiguous words and weasel words and just calling a spade a spade:
What we are talking about when we refer to conscious states, states of awareness, or mindful states is felt states. If a state does not feel like anything to be in, it is not a state in which anyone is aware of anything.
(To detect or possess or process or respond to information [data] is not to be aware of the data, unless the detection or possession or processing or responding to the data is felt.)
SUMMARY: What states we call "psychological" and "mental" is terminological, and a matter of taste. What states we call "aware" is not.
I don't think 'psychological' is what Stevan calls a weasel word. There is a lot of science about psychological functioning, and a very great deal known about the states, some conscious but by no means all, that figure in psychological processes and functioning--i.e., the processes and functioning that constitute the distinctive subject matter of psychology.
I don't suppose it's unimaginable that teapots could be in states that figure in functioning and processes of the distinctive type studied by psychology; imagine animated teapots in a Disney film. But we expect that no real teapots are in states we would characterize as psychological, because they aren't of the sort that figures in the processes and functioning that psychology would study.
It simply does not cut nature at the joints to divide conscious states from the nonconscious states that figure in those processes and functioning; the appeal to what's felt may be of concern to some in philosophy, but does not do justice to the functioning of psychological beings.
But David, I'm not a philosopher.
And I'm not appealing to what's felt -- I'm just appealing for an explanation of how and why organisms feel, rather than just do.
Psychologism is no reply.
RETWEETED 10:16 AM - 6 Jul 12 by Xavier Dery @XavierDery
Dan Lurie @dantekgeek
David Rosenthal: Appeal to utility is common in biology & neuroscience, but not always helpful at the psychological level. #TuringC #AASC16
10:14 AM - 6 Jul 12 via Twitter for iPad
Xavier Dery @XavierDery
Rosenthal: the lag between occurrence of mental phenomena and consciousness of these phenomena is not limited to volition... #TuringC
10:34 AM - 6 Jul 12 via Twicca Twitter app
Xavier Dery @XavierDery
Rosenthal: "the Hard Problem of consciousness only seems hard, the real problem is we have no real theory of the brain" seems legit #TuringC
10:40 AM - 6 Jul 12 via Twicca Twitter app
Unconscious qualitative character? Who's enjoying the quality? And in what does the quality consist, if it is not a feeling?
What does the quality consist in if it is not conscious? It's the mental property in virtue of which--in both conscious and subliminal cases--we distinguish perceptually among perceptible properties. See, e.g., my "How to Think about Mental Qualities," at https://wfs.gc.cuny.edu/DRosenthal/www/DR-Think-MQs.pdf. Who's enjoying the quality? Well, in the subliminal case, nobody is enjoying it consciously. But again, we can't settle substantive questions about nonconscious qualities by stipulative pronouncements that mental qualities occur only when consciously enjoyed. (Is there nonconscious enjoyment? Of course; a nonconscious sensation could, though the individual is not aware of it, result in pleasure; the pleasure might be conscious and it might not be, but in either case it would affect the individual's psychological life in the way pleasure characteristically does.)
I'm afraid I'm by now lost in this maze of mind-like alter-egos inside me, experiencing pleasures while I am deprived of them.
More parsimonious (and comprehensible) to assume that the only one in me that is capable of feeling pleasure (or feeling anything at all) is me, and that the unfelt goings-on inside my head are just goings-on, like the pace-maker that keeps my heart beating (or the teapot boiling) -- not feelings minus the feeler.
(But let's be clear: all states -- both felt and unfelt -- that are going on inside the head of the feeler have unfelt causes. So if I do feel pleasure, its neural causes are unfelt [otherwise I could read them off from my armchair and save neurobiologists a lot of hard work discovering what they are!]. But unless a pleasure is actually being felt, there simply is no pleasure going on at all, period. At least not inside my head (assuming I am not suffering from multiple personality). That's not terminological. It's substantive, indeed logical. I have no idea what an unfelt "pleasure" could possibly mean. I certainly would not call the neural causes, now, of what might eventually generate a feeling of pleasure in an hour "unconscious pleasure", now, no matter how much they may resemble the neural state actually going on while pleasure is actually being felt.)
Nonconscious pleasure is simply any state that has the characteristic psychological effects of conscious pleasure, apart from the individual's being conscious of it.
Saying something is a logical matter is in effect saying it's terminological: there's nothing in logic, properly so called, to tell us one way or the other here.
David, hand on heart, no matter how hard I try, I can't help feeling that a pleasure I never felt is a pleasure I never had, irrespective of any other "characteristic psychological effects" -- except if the effect is to make me feel, today, as if I had felt a pleasure, yesterday, in which case it's still not a pleasure I ever had or felt, it's just a false memory. (Nostalgia's sometimes like that…)
And yes, this still strikes me as a logical matter (but perhaps it's an "analytic a posteriori," if one goes in for such Kantian Koans...).
I don't know how to respond, Stevan. When you speak of the analytic a posteriori, I don't understand that unless you mean a conceptual truth. I don't think it is one; people do have pleasurable states that aren't conscious.
You say you don't understand what I'm talking about; you don't understand it *from a first-person point of view*. But I'm saying that such states occur without always being accessed in a first-person way.
A third person's pleasure sounds to me like someone else's pleasure, not mine, David. (And as far as I know, there are no other pleasure-seekers in my head but me!)
Let's forget the Kantian Koans (a bad joke): "Unfelt pleasures" only makes sense to me as pleasures un-had -- hence more like missed appointments than appointments met by some 3rd party...
It's your pleasure, Stevan--nobody else's! Sorry you're not always aware of it, but it's yours nonetheless!
Ah, wouldn't it be nice, then, if pain, too, were something one could "have" without feeling it...
(But, while we're on the subject: why does anything at all need to be felt?)
I did have a lot of arguments in my talk that a mental state's being conscious *adds* no utility over the state's simply occurring nonconsciously!
See "Consciousness and Its Function" Neuropsychologia, 46, 3 (2008): 829-840, and
"Higher-Order Awareness, Misrepresentation, and Function", Philosophical Transactions of the Royal Society B: Biological Sciences, special issue on Metacognition, 367, 1594 (May 19, 2012): 1424-1438, §IV.
And pains do occur without being conscious--and that's partly nice, but they have all the other noxious psychological effects.
NOXIOUS CAUSES AND EFFECTS
"Noxious psychological effects" sounds like something you feel: That's normally what we call pain. Its antecedents may be the causes of the pain, but they're not the pain (not if they happened yesterday).
I'm not talking about antecedents of consciously felt pain; I'm talking about states that have all the causal links to other psychological states and to inputs and outputs--except for not being conscious.
You keep taking my words--'noxious psychological effects'--to mean what you want them to; but that's not an argument.
UNFELT INJURIES VS. UNFELT PAINS
So I guess you did not mean that unfelt pains "have all the other noxious psychological effects" of pain, but rather that they have all the other negative psychological effects of pain.
What would those negative effects be? May I ask about a particular pain: The pain of a burn. Let's consider both cases:
Mine, the usual one: My right arm is burnt. I feel the pain of the burn. (The burn itself is not a pain; it's a tissue injury.) As a (negative) psychological effect, I can't go to the party and I have to write with my left hand for a while.
Yours, the unusual one: My right arm is burnt. The pain occurs, but it is not felt by me. But I do eventually notice that my arm has a burn on it, so, as a (negative) psychological effect, I can't go to the party and I have to write with my left hand for a while.
Is that what you mean by my "having" a pain, along with all its other negative psychological effects, but without feeling the pain?
Because I would describe that as having an unfelt injury, along with all its other negative psychological consequences; not an unfelt pain.
I did say, and mean, that "pains do occur without being conscious--and that's partly nice, but they have all the other noxious psychological effects." N.B. 'other'. Not just negative; all the properties that don't consist in or depend on their being conscious.
I don't think the case you describe is fair to what I'm talking about.
But why don't we agree to wrap this up? Let me let you have the last word, since you were the magnificent host at the magnificent 10-11-day Montréal event!
"UNFELT PAIN = NOCICEPTIVE DOINGS, NOT PAIN"ReplyDelete
Pity to wrap it up. That there's "unfelt pain" is certainly not a done deal! Lots more to be said and thought about. I suspect we're just dealing with done doings, when you refer to "other" effects.
There would be no hard problem if unfelt doings were all there was. Why some doings are felt is the problem the Summer Institute was all about.
Thanks for a stimulating presentation and discussion, David. I think we both agree that it looks as if anything that we can do consciously could be done unconsciously. What's not explained yet is how and why, then, any of it is done consciously.
One last round, since I think your last post raises a crucial issue. I think the issue about whether what you want to call nonconscious nociceptive doings should be called, as I think, nonconscious pains may not be as interesting as what the difference is between those states and conscious pains. As you know, I think a higher-order theory of consciousness can explain the difference. No state is conscious if the individual is wholly unaware of being in that state; so, by contraposition, some type of higher-order awareness is what makes the difference between a state's being conscious and its not being conscious. And that's independent of what terminology one wants to apply to the nonconscious states.
I completely agree that the profound difference is the difference between unfelt doings and felt doings.
But I can't help repeating that "aware" is a weasel-word (and "wholly" is a bit of a fudge too!): A system that detects an optical input, and acts on it, is not "aware" of anything. It is just acting (doing).
It is only aware of an optical input if it feels like something to detect the optical input.
I think the "higher-order" hierarchy is just hermeneutical. Either the system feels or it doesn't. And if it feels, some of its functions are felt and some of them aren't. The unfelt ones are eo ipso unaware and unconscious doings. And the ones that are felt are felt, whether they are 0-order or Nth-order (feeling that you feel that you feel that you feel...).
But remove the feeling, and what you have left is nothing but doings.