Wednesday, 4 July 2012

Bernard Baars: Psycho-Biological Risks/Benefits of Consciousness

    Abstract: Some philosophers maintain that consciousness as subjective experience has no biological function. However, conscious brain events seem very different from unconscious ones. For example, the cortex and thalamus support the reportable qualitative contents of consciousness; subcortical structures like the cerebellum do not. Likewise, attended sensory stimuli are typically reportable as conscious, while accurate memory traces of the same stimuli are not reportable unless they are specifically recalled. Like other major adaptations, conscious and unconscious brain events have distinctive biological pros and cons, involving information-processing efficiency, metabolic costs, and behavioral consequences. The well-known momentary limited capacity of conscious contents is an example of an information-processing cost, while the very large and energy-hungry corticothalamic system makes costly metabolic demands. Limited capacity can cause death and injury in humans and other animals, as in the case of traffic accidents and predation by ambush. Sleep is a state of high vulnerability among prey animals. We can begin to sketch out some of the biological costs and benefits of conscious states and their stream of reportable contents.

    Baars, B.J. & Gage, N.M. (2011) Fundamentals of Cognitive Neuroscience: A Beginner's Guide. Elsevier/Academic Press. (See Chapter 8 for the brain basis of consciousness and attention.)
    Baars, B.J. (2012) The biological costs of consciousness. Nature Precedings.
    Edelman, G.M., J. Gally & B.J. Baars (2011) Biology of consciousness. Frontiers in Psychology. January, Vol. 2.
    Franklin, S., S. D'Mello, B.J. Baars & U. Ramamurthy (2011) Evolutionary Pressures for Perceptual Stability and Self as Guides to Machine Consciousness. International Journal of Machine Consciousness.

Commentary invited


  1. My title has changed, although it is still in the same domain.

    It is now

    "The biological basis of conscious experiences: Global workspace dynamics in the brain."

    Things are coming together nicely.

    1. On the question of the limited capacity of consciousness, I wonder if what you are really talking about is the limited capacity of *perception*. If the global content of consciousness in our brain is our occurrent phenomenal world (as I think it is), then its contents must be everything that is represented in our phenomenal surround *prior* to the detection of any *particular* objects or events by selective attention, i.e., acts of *perception*. If this is the case, then consciousness can be capacious, whereas perception would be limited by the constraint of selective attention. What are your thoughts about this?

    2. Hi Arnold,

      It doesn't matter what we call things, of course, as long as the theoretical terms are clearly tied to observable operations, and as long as they are rigorously defined and mutually consistent.
      That being said, we obviously try to use terms that are not far removed from everyday usage, so we can get across more easily.
      I think the evidence for BOTH conscious AND unconscious perception (i.e., stimulus representation in the brain) is now very substantial.
      I realize that in traditional psychology (going back to Aristotle, who pretty much defined our scholarly usage of psychological terms) the term "conscious perception" sounds redundant, and "unconscious perception" sounds paradoxical.
      That's why Helmholtz got into hot water with "unconscious inference" or "unconscious conclusions." The phenomena he talked about were mainly perceptual, I believe, or triggered by perceptual events.
      A turning point for me came with the work of Nikos Logothetis (et al) in the 1980s, who tracked stimulus-sensitive neurons in visual cortex using single cell recording (but lots of single cells in each visual area of cortex), in the macaque, whose visual brain is probably the best animal model for the human visual brain. Logothetis carefully recorded in V1, and I think MT (motion), and finally IT (called TE in the macaque), and one or two other places.
      The neat thing is that he used binocular rivalry to simultaneously present visual input that the monkey could identify AND optically identical stimuli that the monkey could NOT identify at any given moment. That was hard to do, but we now know of visual tricks that make the rivalry last longer.
      To trigger off neurons in V1, you use rivalling points of light that cannot be fused into a single dot (I think that's what it was. It could be spatial grids or something similar). To trigger neurons in MT you can use two escalators moving in different directions, which is really easy to do. (I have a little video off the web that shows it with normal stereoscopic vision). To set off neurons in IT (object recognition) you use two visual objects that are different, and therefore cannot be fused.
      The monkey responds after a lot of training using "match to sample." For a coffee cup stimulus it points to a different coffee cup on a screen, etc.
      So you know what's reportable (conscious) and what's not. (Randolph Blake has done a lot of collaborative work with Logothetis, by the way).
      The summary results are:
      a. In early visual cortex (before IT), equal numbers of single neurons are firing for both the reportable and non-reportable stimulus. Around 20% I think.
      b. In IT, object identification, 90% of the neurons are voting for the conscious input, and no neurons responding to the non-reportable, non-conscious stimulus could be found.
      A lot of replication has occurred, mostly indirect, using other brain imaging methods, but that story seems to hold up. Right now that kind of stuff has been done in human epileptics, where it's ethically permissible to do it during exploratory implants prior to surgery. The names on PubMed include Canolty et al, Cerf M et al, Crone et al, and Fried I et al. Koch has also collaborated with others on those studies, which are quite amazing.
      The conclusion, supported by a ton of subliminal studies, is that there are indeed unconscious stimulus representations quite far into cortex. In the case of subliminal snake pictures, I think, and scary faces, there seems to be unconscious emotional pattern recognition for the amygdala, and I would think the fusiform face area.

      That means we have a 2x2x2.

      Conscious vs. unconscious (as assessed by voluntary reportability, checked for accuracy).

      Stimulus-driven (i.e., perceptual) vs. endogenous (visual imagery, for example).

    3. Selectively attended vs. non-attended (using distraction or some kind of momentary overload). Posner's cueing task is a good example of that, where you get attentional shifting without eye movements.
      It looks like conscious events happen more frequently with voluntary selective attention. But we also have bottom-up selective attention, as in the case of bright flashes, or the CS in Pavlovian conditioning. So attention is NOT equal to consciousness if you accept those operational definitions.
      The most attractive definition of selective attention, I believe, is "whatever enables access to conscious (reportable) experiences."
      So those three variables can be teased apart orthogonally.
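      The orthogonality of those three binary variables can be made concrete with a small enumeration. Here is a minimal sketch in Python (the factor names are my own illustrative labels for the variables above, not terminology from the discussion):

```python
from itertools import product

# Three orthogonal binary variables from the discussion above.
# The dictionary keys and level names are illustrative labels only.
factors = {
    "reportability": ("conscious", "unconscious"),
    "source": ("stimulus-driven", "endogenous"),
    "attention": ("attended", "non-attended"),
}

# Each cell of the 2x2x2 design is one combination of the three variables.
cells = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(cells))  # 2 x 2 x 2 = 8 experimental conditions
```

      Any given experimental condition (for instance, an unattended, stimulus-driven, non-reportable stimulus) occupies exactly one of the eight cells.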


    All the selective, focusing and integrative functions of the Global Workspace sound very useful to an active, cognizing organism. But how and why are they felt, rather than just done?

    1. Taking into account the limited-capacity paradox, the Global Workspace needs to be discriminated from the rest of the cognitive system. What little there is in the GW (1 to 4 elements) needs to be highlighted against the rest of the sensory input, or against the cognitive baggage in the case of high-level reflection.

      In regards to sensory inputs, the crucial info has got to be selected by the organism and drawn to the subject's attention. If most of the sensory input is unconscious, then we need to have the important inputs highlighted. Consciousness might just be the way to do so. Otherwise, we would need another cognitive system to do that precise job. It just might happen to be consciousness.

      If we talk about higher level cognition, which is constituted by an enormous background of knowledge, experience and belief, consciousness just might play the same role.
      As an example, the mind drifting off in countless directions just before you fall asleep is a semi-conscious reflection. Fall asleep completely and you will never remember any of it. Wake up and you will have some vague idea of the train of thought you followed. Most probably, that train of thought will make no sense whatsoever, being just an associative process.

      Then maybe the mind needs consciousness as a guideline for action. Maybe thought processing cannot work by itself, because the reasons we think are beyond our genetic programming.

      For example: wasps that feed on beetle larvae know exactly where to strike their darts in order to paralyze the motor nerve centers and protect their fragile young when they hatch. This precise surgical knowledge is neither learned nor reflexive, since it is gene-encoded. But beyond basic reflexes like suckling and grasping in infants, human beings have outgrown the capacity of mere genetic encoding. Evolution has found another way to get us to survive, and that would be consciousness.

    2. Yes, I agree, Mr. Pelletier.

      I would say that without the dismissive adjectives like "just," etc. It's a wonderful problem, and you don't want to take all the fun out of it. For one thing, there could be more ponies hidden in those woods...

      In addition, I haven't dealt with all the aspects of the problem in this one brief presentation. Take a look at the 1988 book if you want to see other insufficiently covered aspects of this particular Mount Everest.

      Good luck!



    I think the key actually lies in the interaction of the observing executive ego of the prefrontal cortex (with links to parietal egocentric maps, for example) with allocentric (other-attributed) sensory input. Conscious objects of experience, like coffee cups in peripersonal space, require the interaction of those two systems. In my 1988 book (which everyone should have memorized by now) I argue that observing ego functions are coextensive with contextual frames for qualitative experiences, and that the actual conscious experience of red objects involves reduction of degrees of freedom within the contextual color system as well as the egocentric/allocentric spatial maps.

    What makes subjective qualia different from mere conjunctions of features is the interaction of the extended ego-frame system (the context hierarchy of my 1988 book) with unpredictable input. That's why you need the information reduction, a kind of Piagetian accommodation along the dimensions of subjective experience (e.g., psychophysical dimensions). In a sense all input that is experienced as subjective shakes up the entire dominant context hierarchy.

    I have not developed this idea beyond the 1988 chapters, but it should be fairly straightforward to do so. It was not particularly enticing to do that as long as nobody understood the other parts of the theory.

    Maybe the time is ripe to do that. Right now I'm finishing one long-term paper, and it could be time to do another, if I can figure out an appropriate venue.

    The seeds for this approach were laid in the psychophysical tradition, by the way. That is where we can also find a lot of the evidence and possible predictions.

    1. I share Doctor Harnad’s concern about felt experience itself.
      IMHO, the account given by Global workspace theory is one of subjective experience and of information integration. I am deeply convinced that there is something fundamentally right about this approach to consciousness.
      However, I am still curious as to what mechanism in the neuronal network could possibly yield the felt, qualitative aspects of the representations having reached global access—not the subjective aspect per se, as first-person perspective, but the actual felt quality of the representations. This remains unexplained.

    2. OK --- but then it's vitally important to operationalize the notion of "felt sense" in a way we can test empirically. Since we're talking about a huge biological adaptation, with a 200 million year history (the history of mammals, which have the right neural machinery), it is not just floating in a little cloud above our heads...


    3. Thank you for responding, Doctor Baars.

      I have been thinking about the problem of operationalizing felt sensation a lot over the past few weeks. It seems to me that the “hard” problem is hard precisely because we have not developed a framework in which we can address the felt aspects of conscious representation. IMHO, the major problem to be solved is that we do not yet have a specific methodology able to characterize with any degree of objectivity and precision which sensations are actually felt.

      Suppose I wish to refer to a specific red hue, say the red of a flower petal, in my visual field. The problem, I would argue, is that there is no way to verify whether we have the same felt experience of the flower petal. Indeed, pointing to a specific portion of the petal is of no help because, given the contingencies of perceptual experience (slight differences in light reflection, spatial position, and even biological makeup of the perceiver), it is very likely that the felt aspect of our conscious representation of the flower petal will vary ever so slightly. Even if we look at the same petal, we cannot verify that we have the same qualitative representation.
      In other words, when I refer to an objective stimulus, there is no problem of reference because the same object is available in our overlapping experiences (i.e., we both see a flower petal). Most scientific theorizing is made from just this point of view: we use our representations to refer to the world. The problem, however, begins when we attempt to speak of our *representations* per se, which cannot overlap in the same way. We still have no systematic way of verifying whether we are feeling the same shade of red, hearing the same pitch, etc.

      How does one refer to qualitative aspects of conscious representations? Can one actually refer to qualia? Perhaps this is the key to solving the “hard” problem.
      I have begun to think that perhaps the way out of this impasse is to study subjects with obviously different felt experiences of the same object; for instance, it is conceivable that comparative studies of patients with tri- and dichromatic vision could be of some help, or again of normal subjects and others with blindsight.

    4. The hard problem is turned into an insoluble problem by the mistaken notion that feeling must be something that is *added* to an essential brain process -- the activity of a particular kind of brain mechanism. So the objection is repeated "But the *doing* of the brain mechanism does not explain its *feeling*!" If we adopt a monistic stance, then the processes -- the doings -- of the conscious biophysical brain must *constitute* feelings, and nothing has to be added to these essential brain processes.

      I have argued that we are conscious only if we have an experience of *something somewhere* in perspectival relation to our self. The minimal state of consciousness/feeling is a sense of being at the center of a volumetric surround. This is our minimal phenomenal world that can be "filled up" by all kinds of other feelings. These consist of our perceptions and other cognitive content such as your emotional reaction in response to reading this comment.

      On the basis of this view of consciousness, I proposed the following working definition of consciousness:

      *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

      The scientific problem then is to specify a system of brain mechanisms that can realize this kind of egocentric representation. It is clear that it must be some kind of global workspace, but a global workspace, as such, is not conscious -- think of a Google server center. What is needed is *subjectivity*, a fixed locus of spatiotemporal perspectival origin within a surrounding plenum. I call this the *core self* within a person's phenomenal world. A brain mechanism that can satisfy this constraint would satisfy the minimal condition for being conscious. I have argued that the neuronal structure and dynamics of a detailed theoretical brain model that I named the *retinoid system* can do the job, and I have presented a large body of clinical and psychophysical evidence that lends credence to the retinoid model of consciousness.

    5. I agree wholeheartedly with you; I too think that the “doing”—“feeling” dichotomy is a false one, in that, from a monistic point of view—and we’re all monists here, aren’t we?—feeling must necessarily be something the nervous system does, and hence feeling is a kind of doing. I would even go so far as to say, following the late F. J. Varela, that “living is sense-making”. Up to here we’re in agreement.

      However, I disagree that feeling can be reduced to a privileged egocentric perspective. IMHO, such a position conflates subjectivity with phenomenality. Correct me if I’m wrong, Doctor Trehub, but you seem to be implying that if we can determine the mechanism responsible for integrating information into a first-person perspective, using for instance a global workspace architecture, then the problem of phenomenal experience itself dissolves. Once we have specified the processes that generate a minimal ego-space, all we would need to do is populate the latter with phenomenal objects.

      I would argue that things are not so simple. Indeed, while models like Baars’ and Merker’s do an excellent job explaining this first-person vantage point, they do not yield felt qualities per se, at least not in an explicit way. Consider Merker’s example of the vehicle equipped with a camera and imaging software. The vehicle in question is able to generate a kind of minimal ego-space; yet, I would venture that it does not yet feel any of the things that populate its phenomenal world. Indeed, the claim that all I need to do is “fill up,” as it were, my egocentric perspective with objects (such as my perception of the text on this screen, or my feeling of great interest upon reading your comments) seems, at least to me, to be yet another spin on the “extra ingredient” solution. The only difference is that the extra ingredients you are proposing get their phenomenality from an equally mysterious property of the ego-space, which is to generate feeling, for some unexplained reason.

      I am extremely sympathetic to ego-space oriented views of phenomenal experience, but they do not explain away phenomenality, IMHO.

    6. Maxwell J. Ramstead: "Indeed, while models like Baars’ and Merker’s do an excellent job explaining this first-person vantage point, they do not yield felt qualities per se, at least not in an explicit way."

      I must say that Baars' and Merker's models do NOT *explain* the first-person vantage point. They *posit* a first-person vantage point, but they do not specify the neuronal structure and dynamics of a brain mechanism that can realize a first-person vantage point. Also, a vehicle equipped with a camera and imaging software does NOT generate a minimal ego space because it has no internal analog representation of the volumetric space in which it exists. To my knowledge, my detailed model of the *retinoid system* is presently the only model that *explains* subjectivity/1st-person perspective. Moreover, the SMTT experiments that I cited actually demonstrate that a vivid conscious experience, without a matching stimulus, can be systematically generated and shaped by the properties of the brain's putative retinoid mechanism.

      What do you think has to be added to the biological structure and dynamics of the retinoid system to give us our phenomenal world?

    7. Thank you for getting back to me so fast, Doctor Trehub! I will refrain from further speculation until I have examined your model, which looks quite interesting indeed. I have downloaded a copy of chapter 16, and I will get back to you.
      Can you suggest any more material? I’d be glad to take a look.

    8. Glad to oblige. Here are three recent publications that you might look at:

    9. Hi Bernie,

      You wrote: "I argue that observing ego functions are coextensive with contextual frames for qualitative experiences ..."

      This has puzzled me for a long time. I think of the *ego* as the core self (I!), a cluster of neurons that constitute the perspectival locus of spatio-temporal origin within our phenomenal space (retinoid space in my theoretical model of consciousness). It seems to me that the long-held notion of the ego/self as an *observer* has been a serious stumbling block in our understanding of consciousness. In detailing the plausible neuronal mechanisms that might generate our conscious content, I found that the ego/core self could not have the biological machinery needed to be an observer and, at the same time, function as the fixed perspectival origin of all conscious experience. The only way that my theoretical model of the cognitive brain could work effectively was to have observing mechanisms in the synaptic matrices among all the pre-conscious sensory modalities. So *observation* could not be an ego function. The role of the retinoid system is to *bind* the various pre-conscious sensory observations/features (as patterns of recurrent axonal projection) in proper spatio-temporal register within retinoid space, our phenomenal world. In this process, selective excursions of excitation over retinoid space that are induced by the core self/ego (heuristic self-loci) play a critical role. But this does not entail an observing ego. For more about this, see:


      This way of thinking about the self/ego in relation to observation has enabled the explanation of many previously inexplicable conscious phenomena and the successful prediction of new experimental findings. I would be greatly interested in your thoughts about this formulation of *observation* in the retinoid theory of consciousness, Bernie.

    10. HARD PROBLEMS NEED SUBSTANTIVE SOLUTIONS (Reply to multiple commentaries)

      The "hard" problem is not a metaphysical one, and declaring oneself to be a card-carrying monist does not solve it. Nor does "operationalizing" the measurement of feeling. (That's the other-minds problem, and the Turing Test -- T2, T3 or T4 -- is the best we'll ever get.)

      Nor does one's monist-card do away with the doing/feeling dichotomy: Yes, the brain must cause feeling, somehow, for some adaptive or functional reason. And causing is doing. But the trouble is that we don't know how, and we don't know for what adaptive or functional reason. And explaining that is the hard problem. It cannot be hand-waved away by blurring distinctions or invoking monism or telling Just-So stories...

    11. *Some stories help us explain/understand consciousness/feeling. Other stories obfuscate our understanding of consciousness/feeling.*

      Stevan, what is the difference between a "Just-So story" and a scientific theory?


      A scientific theory gives a testable causal explanation of the evidence.

      A Just-So Story just gives a causal interpretation (hermeneutics), viz:

      "Why do plants grow toward the sun?" Because of a phototropic force.

      "Why do organisms feel?" Because of activity in this neural system...


      Stevan: "A scientific theory gives a testable causal explanation of the evidence."

      Consider the following:

      1. I am conscious if and only if I have a *sense of being here with something all around me even though the particulars are constantly changing*. Call this the minimal conscious content (MCC).

      2. MCC must be the product of an active brain. Given this stipulation, I have proposed, as a working definition, that consciousness (MCC) is a transparent brain representation of the world (the space that is all around me) from a privileged egocentric perspective (me here).


      1. What system of mechanisms in the brain has the competence to cause MCC? I have proposed that the human brain has a system of neuronal brain mechanisms with the structure and dynamics that can represent a global volumetric spatiotopic analog of the world space we live in, including a fixed locus of perspectival origin that I call the *core self* (I!). This part of the retinoid system is called RETINOID SPACE. I have specified the minimal structure and dynamics of the brain system that regulates the content of retinoid space and call it the RETINOID SYSTEM.

      2. Why should we think that the retinoid system is a competent causal model of MCC? It seems clear that any competent model should be able to make relevant predictions that can be tested and are empirically validated. One thing we should NOT expect is that a competent causal model must be able to exhibit ALL the properties of MCC. (I think this unwarranted expectation plays a part in the "explanatory gap" notion in consciousness studies.) What we should expect is that the candidate model of MCC be able to generate matching *analogs* of relevant properties of the phenomena.

      3. In a wide range of empirical tests, the operating characteristics of the retinoid model successfully predicted/explained previously inexplicable conscious phenomena/feelings, and also successfully predicted novel conscious phenomena. Among many examples are hemi-spatial neglect, seeing-more-than-is-there (SMTT), Julesz random-dot stereograms, the pendulum illusion, 3D experience from 2D perspective drawings, the moon illusion, the Pulfrich effect, etc.


      Organisms without consciousness/feeling do not have an internal global representation of objects and events in the world they live in, and can only respond to the immediate exigencies of their environment with reflexive adaptation. Conscious creatures, on the other hand, do have internal representations of the world they live in and gain an evolutionary advantage by being cognizant of the objects and events in the world with affordances for their survival and flourishing. In humans, consciousness also enables the imaginative and practical reconstruction of the world we live in.

    14. Why do internal representations have to be felt, rather than just representing?

    15. Stevan: "Why do internal representations have to be felt, rather than just representing?"

      There is only one kind of internal representation that is FELT; it is any representation that is located in retinoid space, which is the phenomenal world in our extended present. All other internal representations are NOT felt. Each of our different sensory modalities may contain representations in its synaptic matrices, but these distinct representations remain PRE-CONSCIOUS/UNFELT until they are projected into retinoid space, via recurrent axonal excitation, and bound in proper spatio-temporal register where they become SOMETHING SOMEWHERE in our phenomenal world in perspectival relation to our core self (I!). This is the SUBJECTIVE DOING of the retinoid system that CONSTITUTES feeling/phenomenal experience.

      What are the counter arguments?

      As an aside, I want to express my thanks to you, Stevan, for providing this sky-writing platform where we can actively engage in detailed discussion about consciousness, the most significant and vexing problem in science. This is the kind of back-and-forth that is needed.

    16. Arnold: Why and how is "any representation that is located in retinoid space" felt? I assume this is the activity of some neural system, actual or theoretical. But it is not enough to just say it is so: Why is retinoid activity felt? How does retinoid activity generate feeling?

      One cannot just posit that a neural system feels, and then ask for counter-arguments. One has to explain how and why it feels, rather than just does whatever it does, unfeelingly.

    17. Stevan: "One cannot just posit that a neural system feels, and then ask for counter-arguments. One has to explain how and why it feels, rather than just does whatever it does, unfeelingly."

      I did explain how and why one feels in my post above of 23 July 2012. Apparently you are still puzzled. Let me approach the problem from a different angle. If we are to explain how and why we feel, we must offer an overt description of what it is like to have any kind of feeling.


      This is your primitive phenomenal world. But with no sensory transducers to detect the space around you, how can you feel that you are in a space? This is the astonishing aspect of feeling that is the key to understanding how and why we feel! The neuronal structure and dynamics of retinoid space provide an innate brain representation of the space we live in, and it is organized around a fixed locus of spatio-temporal perspectival origin which is our self-locus -- our core self (I!). This special kind of neuronal brain mechanism CONSTITUTES *subjectivity*, which is the fundament of all feeling/consciousness. So autaptic-cell activity in retinoid space CAUSES FEELING and NOTHING ADDITIONAL HAS TO BE ACCOUNTED FOR AS A GENERATOR OF FEELING. The WHY question is also answered in my 23 July post. You should note that all of this is not mere speculation, as there is a very large body of empirical findings that support the retinoid theory of subjectivity/consciousness/feeling. The bottom line is that the retinoid system CANNOT DO WHAT IT DOES UNFEELINGLY!

      Stevan, if you still believe that some additional kind of brain activity is needed to account for subjectivity/feeling, then please tell us what you think it is and how its properties might be empirically tested.

    18. Arnold, this is unfortunately getting too repetitious. Neither of us is providing any new information. What you are describing is a system that can behave adaptively in space. It needs sensors to detect, information processing, some dynamics, and effectors to respond. Even today's simple robots do some of that.

      You haven't given a hint of a hint about why any of that should be felt, whether it happens inside a robot, or inside an organism with a brain.

      Now I will let you have the last word, but I will no longer respond unless something new and substantive comes up.

    19. Stevan: "What you are describing is a system that can behave adaptively in space. It needs sensors to detect, information processing, some dynamics, and effectors to respond. Even today's simple robots do some of that."

      You have missed the most relevant aspect of the retinoid model. Retinoid space is a volumetric analog of the space that exists around you, and it includes a fixed locus of perspectival spatio-topic origin -- the self-locus (I!) in the retinoid theory of consciousness/feeling. Also, retinoid space does NOT need sensors to give you a sense of being *here* in a surround. Sensory projections add phenomenal content/feelings to the subjective primitive of feeling as the origin of all experience. I should add that there is no known artifact, robot or computer, that contains an analog representation of the volumetric space in which it exists containing a representation of any of its parts.

      Before we close our exchanges, Stevan, would you tell us what kind of scientific theory and evidence would, in your opinion, count as an explanation of how and why we feel?

    20. I must say that I identify with Dr. Trehub's sentiments here. I admit that I have not critically evaluated all of the empirical evidence Dr. Trehub has cited in support of his theory of retinoid space, but the conceptual framework he has described does attempt to address how feeling is generated and represented in the brain. In any case, the empirical support does not seem to be addressed in the above exchange, and Dr. Harnad dismisses the model on a merely conceptual basis ("Why do internal representations have to be felt, rather than just representing?").

      To address Dr. Trehub's points at their fundamental level, he believes we form perspectival and subjectively-unique representations of the world and ourselves in a network of neurons that he calls "retinoid space"; this is the basis of the subjectivity of feeling. (Whether this is a single locus or a distributed network of neurons is conceptually irrelevant, though this would need to be addressed in a complete explanation of how we feel.)

      This type of subjectively-contextual neuronal network can operate independently or in concert with other brain systems (e.g., the motor control of respiratory muscles can occur independent of the retinoid space during autonomous breathing, or bound to the activity of neurons which compose the retinoid space). The binding of the retinoid space (or whatever you'd like to call the neuronal basis of subjective feeling) with myriad brain functions (other doings) could occur through mechanisms described by the global workspace theory of Dr. Baars, and/or the supramodular interaction theory of Dr. Morsella, etc.

      Before dismissing this by asking why we need subjective feeling in such a system as described above, rather than just autonomous doing, I'd appreciate your opinion on this: could such a system exist without producing feeling out of these doings?


      @Roberto, what I am rejecting on a "merely" conceptual basis, you are accepting on a merely conceptual basis!

      And in doing so I am afraid you are missing the point, which is certainly a conceptual one:

      "Perspectival representations" are fine; you can have them in a robot (even "volumetric" ones, pace @Arnold!). But why should a representation be felt just because it's perspectival or volumetric?

      I know it's discouraging to have every hopeful starter rejected, but the problem is not called "hard" for nothing. You can bet that if there is a solution, it's not going to be an easy (and question-begging) one like "perspectival representations."

      What makes simple solutions look like they work is almost always hermeneutics -- which means interpreting something in terms of something else:

      We know what it feels like to feel. Feeling is perspectival. Feelers have a "point of view."

      (Add a few redundant mentalistic adjectives: a subjective, "1st-person" point of view. For good measure, call it a "conscious experience", of which you are "aware." Add that it is "aspectual" and has a "phenomenal character": an incantation of "qualia" sometimes helps too.)

      And when you've done all that, add that you have a perspectival model -- and presto, you have a solution to the hard problem.

      The naive little niggler that always gets forgotten, though, is:

      "Yes, I can interpret your model's properties as if they were felt properties, but you forgot to tell me how and why they were felt: Because otherwise they are simply encoded properties, and enacted properties, as in a robot (in other words, doings). They may have all the objective features you attribute to them -- doings, all -- but how and why are they felt?"

      Mentalistic hermeneutics creates a hall of mirrors in which you read off exactly what you have projected into it, forgetting that the source is you.

      (By the way, Arnold Trehub's "perspectival/volumetric" hermeneutics are very similar to Bjorn Merker's "ego-centre" hermeneutics. And with his "global workspace," Bernie Baars is -- to pinch a quip from Bradley on metaphysics -- "a brother hermeneutician with a rival interpretation". Ditto for Shimon Edelman's "temporal integration" hermeneutics and Antonio Damasio's homeostatic hermeneutics, and for that matter, it would be true of Dan Dennett's "heterophenomenology" too, right down to the last JND -- if it weren't for the fact that Dan is actually denying the existence of feelings altogether, as not being anything but the doings of heterophenomenology...)

    22. With all due respect, you've moved too quickly in saying I accept the proposal on a conceptual basis. However, over the past 1-2 months, I have formed at least a preliminary idea of what I believe the answer to the hard problem will look like, and I think that proposals such as those summarized by you above will be integral components, individually necessary but insufficient. Dr. Trehub's proposal appears consistent with some part of what I currently believe to be a plausible explanation of consciousness, and warrants further discussion and critique. However, I do not yet accept Dr. Trehub's proposal, since I have not yet had time to appropriately evaluate and critique it.

      I don't believe the hard problem of feeling will be "solved" by a concise reductionist solution (a singular physical, yet undiscovered NCC), nor will it be answered at all if we don't reify the concept of feeling from something that is unavailingly subjective and distinct from doing (and thus inaccessible to scientific observation and experimentation). For these purposes, we should turn to hermeneutic proposals which explain feeling not in the subjective sense, but explain feeling by the mechanisms which contribute to its production — explain feeling in terms of doing since doing causes (causes - not correlates with) feeling. I'll expound on this in our discussion thread on your talk, as I think these two issues at this point coalesce…

    23. Stevan: "'Perspectival representations' are fine; you can have them in a robot (even 'volumetric' ones, pace @Arnold!)."

      This is an important question of fact, Stevan. Please give us one example of an existing robot that contains an analog representation of the volumetric space in which it exists and has a representation of a part of itself as the locus of perspectival origin within this volumetric space.

      Roberto: "— explain feeling in terms of doing since doing causes (causes - not correlates with) feeling."

      Yes, indeed! Stevan makes an incoherent claim when he allows that particular kinds of brain doings must cause feelings (conscious content), but that brain doings cannot explain how brain doings are felt. It is his strong intuition that there is no kind of brain activity that IS feeling. My strong intuition is that feeling must BE a particular kind of brain activity. I have specified the kind of brain activity that is necessary and sufficient to constitute feeling (activation of retinoid space), and I have presented empirical evidence in support of the retinoid model of consciousness. Stevan apparently believes that his intuition trumps empirical evidence.

    24. Stevan: "Mentalistic hermeneutics creates a hall of mirrors in which you read off exactly what you have projected into it, forgetting that the source is you."

      Isn't this true of all human judgement? Surely you don't exempt yourself from this human dilemma. Science does a reasonably good job of compensating for our personal idiosyncrasies by demanding empirical evidence to support our personal guesses about how the world works. What empirical evidence supports your guess that human feelings are not the doings of a human brain?


      @ Roberto: "I don't believe the hard problem of feeling… is answered… if we don't reify the concept of feeling from something that is unavailingly subjective and distinct from doing (and thus inaccessible to scientific observation and experimentation). For these purposes, we should turn to hermeneutic proposals which explain feeling not in the subjective sense, but explain feeling by the mechanisms which contribute to its production — explain feeling in terms of doing since doing causes (causes - not correlates with) feeling."

      Of course we could interpret feeling a migraine as doing something (we can interpret a grain of sand as doing something!), but it's a bit harder to interpret why doing that something (whatever it is) should feel like a migraine (or feel like anything). I'm not sure what you mean by "reifying" a feeling, but the tougher part seems to be to sentimentalize a doing. Hermeneutics, of course, knows no bounds. One can interpret anything as anything. A causal explanation, though, is not so easy…


      That the brain must cause feeling, somehow, is a belief most of us share. A causal explanation would explain how, and why. That's what's missing. Interpreting a "perspectival/volumetric representation" as "feeling" does not tell you how or why, it's just a just-so story (as all interpretations are).

    26. Stevan: "Interpreting a 'perspectival/volumetric representation' as 'feeling' does not tell you how or why"

      1. I agree that my working definition of feeling does not, in itself, tell us how the brain causes feelings, but the neuronal structure and dynamics of the mechanisms in the brain's retinoid system do tell us HOW. And the unique adaptive value of feeling as a coherent global representation of the world from an egocentric perspective does tell us WHY.

      2. Stevan, you haven't yet given us an example of an existing robot that contains an analog representation of the volumetric space in which it exists and has a representation of a part of itself as the locus of perspectival origin within this volumetric space. Do you still claim that such a robot exists?


      The turtle robots are not there yet, but they're coming along: Do you see any principled reason they could not be scaled up to volumetric/perspectival representations?

      Because if you can actually say what it is in the retinoid model that could not possibly be implemented in a robot, that might move us further forward. (The flip side of the hard problem is "How and why are we not -- and could not be -- zombies?")

    28. Stevan: "The turtle robots are not there yet, but they're coming along: Do you see any principled reason they could not be scaled up to volumetric/perspectival representations?"

      Turtle robots have no representation of space, let alone an analog representation of the volumetric space in which they exist with a representation of a part of themselves as the locus of perspectival origin within this volumetric space. It is not just a matter of the scale of the machine; it is the need for a mechanism that has an analog representation of the volumetric space around it from a fixed locus of perspectival origin.

      I can't say that it is impossible for the retinoid model to be implemented in a robot. But many years ago I had direct experience designing complex electro-mechanical systems and I think it would be extremely difficult to build a working retinoid system with current technology. To get an idea of what would be involved, see MODELING THE WORLD, here:

      and OBJECT RELATIONS, here:

  4. Firstly, I was very interested in the idea that the cortex is "awake" during slow-wave sleep at the peak of the delta wave. This is intuitively acceptable because of the nature of an EEG. However, did you mean this only in a superficial and literal sense, or that *thoughts* might actually appear in the mind of a sleeping person? I have heard that night terrors typically occur during Stage 4 sleep rather than during REM sleep, which is usually associated with dreaming.

    In a similar vein, your proposal that consciousness is not localized to one area, as has been found with attention/working memory, is also intriguing, because a dedicated structure for consciousness seems unlikely. I say this because, just as sensory memory is at least partially found in the neuronal connections of the relevant association areas, it seems likely (on a common-sense level only) that this sort of multimodal integration and selection cannot be limited to any one structure in the brain (except the thalamus!). The question I see here is: is the cortical gamma-wave synchrony observed during conscious action consciousness itself (i.e., GWT), or just the "steam whistle"?

    1. This is a very hot research area, and I would strongly recommend searching for the latest and greatest findings. I'm not interested in making claims (where I can't run the experiments myself) but only in reporting what others have found. Check out Massimini and Tononi, and more recently Rosanova, Laureys, Massimini, and Tononi. There are others.

      There are many reports of mentation during slow-wave sleep. Another PubMed search will turn those up. SWS mentation is said to be much less fantastical than classical dreams.

      My 1988 book, last chapter, suggests six "necessary conditions" for consciousness, which are not reducible to ONLY GWT, or ONLY gamma, or whatever. Since 1988 we've picked up some more brain conditions. Please see the two textbooks by Baars & Gage, or other solid sources.

      Remember, this is the Himalayas of empirical questions. You have to do a lot of walking before it all becomes clear...

  5. I very much agree with the aspects of global workspace dynamics which allow for multiple loci/types of consciousness we all possess (and much stressed by Dr. Harvey). The example of HM mentioned is particularly illustrative of exactly why this type of system may be plausible, and the advantage of having a distributed but integrated neural network underlying these many forms of consciousness.

    It was rightly pointed out that HM was still conscious, despite the rather drastic bilateral MTL resection. However, in the decades between his surgery and his death, he was unable to hold any experience in his consciousness (in this case, long-term memory) once the moment in question had passed. He was fully functioning and conscious with respect to the moment he was in, or to the decades prior to his operation. He could even learn new motor skills, or recall a simple number sequence (5-8-4) until his attention was shifted; but still, the aspects of his consciousness afforded by the MTL were irrevocably lost.

    1. Right. I think you probably did not mean "he was unable to hold any experience in his consciousness (in this case, long-term memory)." Stored information in LTM is not conscious. We need to retrieve it in order for it to become conscious.

  6. The theory presented by Dr. Baars that different brain regions could participate in consciousness (in the loose sense of the word) by contributing different conscious contents is very appealing.

    Since some of these regions are more developed in humans/primates compared to other mammals (PFC, for example), and presumably they contribute different conscious contents compared to evolutionarily older structures, I wonder which came first - the need to have these contents in our consciousness, or the rapid evolutionary growth of the appropriate structures (I know it's a kind of chicken-or-egg question!).

    1. Yes, it's chicken and egg, because we don't know the intermediate forms in evolution. From over 200 million years of mammalian evolution we have only some spotty leftovers. It is also likely that evolutionary changes are commonly punctuated, when entire stretches of DNA move or are deleted, or are even "borrowed" from viruses or symbionts (like the mitochondria). In recent human evolution, the Funnel Beaker culture of northwestern Europe is thought to have changed substantially over a short period of time by means of jumpy evolution.

      There's a lot of thinking that consciousness predates the mammalian cortico-thalamic system, and that there are important parallels in birds and cephalopods. Precisely how that could be is beginning to be a hot topic.

      This is fun stuff.

  7. Is it possible to have the download link to the mp4 file instead of the youtube link, please ? Thank you

  8. I don't know how to do it, but I'm always looking for IT-savvy helpers!

  9. Dr. Baars makes a powerful empirical case for GWT. But I just want to raise one tiny philosophical issue: there is a huge debate among materialist philosophers as to whether we should try to give a reductive explanation of phenomenal consciousness at the intentional/cognitive level or at the neurobiological level. (In a nutshell: people who choose the first option believe that all that is required for a creature to have conscious states is some kind of information-processing system that functions in some specific way (regardless of how it is implemented physically). People who choose the second option will say that for a creature to have conscious states certain physical events that can be described at the neurobiological level (e.g. neurons oscillating at some specific frequency) must occur in its brain.) His talk did not address this issue directly and he did not mention the notion of NCC. So it is not clear to me which of those two views he favors. His 1988 book includes the term ‘cognitive’ in its title, so it seems to suggest the first option. But I have also noticed that in their 2011 paper Lau and Rosenthal keep referring to GWT as 'neuronal GWT'. So which option does he favor?

  10. Originally posted on facebook
    Sossin: "[Aplysia have] no need for firing in the absence of inputs" - could this be a clue to our self-awareness? Our neurons are supercharged and MUST discharge even in the absence of stimulus or motor control, so it accidentally turned into self-awareness... It would then be just a spandrel of our powerful brain? No function needed to explain it.

    It's a very interesting point. He said that consciousness may require a certain minimal brain size. If we follow your logic, might it not be the size of the brain so much as how "occupied" it is by what it has to process that could bring about consciousness?

  11. Xavier Dery ‏@XavierDery

    Baars talk: the theatre analogy for relating consciousness to general brain workings seems to me very elegant and really useful! #TuringC

    12:39 PM - 4 Jul 12 via Twitter for Android

  12. I feel like Dr. Baars staged a concept at the beginning of his talk that wasn't explicitly integrated with the rest of his presentation. Specifically, he discussed evolutionary pressures regarding the limited capacity and compensatory value of consciousness. While the subsequent content (biology of global workspace theory) is clearly related to his introduction, I don't think it was ever contextualized evolutionarily nor was the compensatory value made explicit. It seems as though this first section was more related to the earlier title of his talk.

    There are a few benefits of a “global workspace”-style consciousness that occur to me. For one, it provides a powerful ability to integrate disparate sources of information. It seems this would facilitate rapid and flexible learning and generation of myriad behavioral contingencies. The modularity also affords robust functioning in the face of brain injury (though one could argue that this is a general property of the brain, not the global workspace specifically).

    Are my conclusions valid? Would Dr. Baars or other interested parties care to answer the issues I feel were unaddressed?

    I should mention that I am nitpicking, and I found the talk quite fascinating.