Comments on Turing Consciousness 2012: Bernard Baars: Psycho-Biological Risks/Benefits of Consciousness

Arnold Trehub (2012-08-16, 07:56):

Stevan: "The turtle robots are not there yet, but they're coming along: Do you see any principled reason they could not be scaled up to volumetric/perspectival representations?"

Turtle robots have no representation of space, let alone an analog representation of the volumetric space in which they exist with a representation of a part of themselves as the locus of perspectival origin within this volumetric space. It is not just a matter of the scale of the machine; it is the need for a mechanism that has an analog representation of the volumetric space around it from a fixed locus of perspectival origin.

I can't say that it is impossible for the retinoid model to be implemented in a robot. But many years ago I had direct experience designing complex electro-mechanical systems, and I think it would be extremely difficult to build a working retinoid system with current technology. To get an idea of what would be involved, see MODELING THE WORLD, here:

http://people.umass.edu/trehub/thecognitivebrain/chapter4.pdf

and OBJECT RELATIONS, here:

http://people.umass.edu/trehub/thecognitivebrain/chapter7.pdf

Instructor (2012-08-15, 16:04):

TURTLES ALL THE WAY UP

The turtle robots (http://en.wikipedia.org/wiki/Turtle_(robot)) are not there yet, but they're coming along: Do you see any principled reason they could not be scaled up to volumetric/perspectival representations?

Because if you can actually say what it is in the retinoid model that could not possibly be implemented in a robot, that might move us further forward. (The flip side of the hard problem is "How and why are we not -- and could not be -- zombies?")

Arnold Trehub (2012-08-15, 07:40):

Stevan: "Interpreting a 'perspectival/volumetric representation' as 'feeling' does not tell you how or why"

1. I agree that my working definition of feeling does not, in itself, tell us how the brain causes feelings, but the neuronal structure and dynamics of the mechanisms in the brain's retinoid system do tell us HOW. And the unique adaptive value of feeling as a coherent global representation of the world from an egocentric perspective does tell us WHY.

2. Stevan, you haven't yet given us an example of an existing robot that contains an analog representation of the volumetric space in which it exists and has a representation of a part of itself as the locus of perspectival origin within this volumetric space. Do you still claim that such a robot exists?

Instructor (2012-08-14, 19:47):
SENTIMENTALIZING DOINGS

@Roberto: "I don't believe the hard problem of feeling… is answered… if we don't reify the concept of feeling from something that is unavailingly subjective and distinct from doing (and thus inaccessible to scientific observation and experimentation). For these purposes, we should turn to hermeneutic proposals which explain feeling not in the subjective sense, but explain feeling by the mechanisms which contribute to its production — explain feeling in terms of doing since doing causes (causes - not correlates with) feeling."

Of course we *could* interpret feeling a migraine as doing something (we can interpret a grain of sand as doing something!), but it's a bit harder to interpret why doing that something (whatever it is) should feel like a migraine (or feel like anything). I'm not sure what you mean by "reifying" a feeling, but the tougher part seems to be to sentimentalize a doing. Hermeneutics, of course, knows no bounds. One can interpret anything as anything. A causal explanation, though, is not so easy…

@Arnold

That the brain must cause feeling, somehow, is a belief most of us share. A causal explanation would explain how, and why. That's what's missing. Interpreting a "perspectival/volumetric representation" as "feeling" does not tell you how or why; it's just a Just-So story (as all interpretations are).

Arnold Trehub (2012-08-14, 08:41):

Stevan: "Mentalistic hermeneutics creates a hall of mirrors in which you read off exactly what you have projected into it, forgetting that the source is you."

Isn't this true of all human judgement? Surely you don't exempt yourself from this human dilemma. Science does a reasonably good job of compensating for our personal idiosyncrasies by demanding empirical evidence to support our personal guesses about how the world works. What empirical evidence supports your guess that human feelings are not the doings of a human brain?

Arnold Trehub (2012-08-13, 08:53):

Stevan: "'Perspectival representations' are fine; you can have them in a robot (even 'volumetric' ones, pace @Arnold!)."

This is an important question of fact, Stevan. Please give us one example of an existing robot that contains an analog representation of the volumetric space in which it exists and has a representation of a part of itself as the locus of perspectival origin within this volumetric space.

Roberto: "— explain feeling in terms of doing since doing causes (causes - not correlates with) feeling."

Yes, indeed! Stevan makes an incoherent claim when he allows that particular kinds of brain doings must cause feelings (conscious content), but that brain doings cannot explain how brain doings are felt. It is his strong intuition that there is no kind of brain activity that IS feeling. My strong intuition is that feeling must BE a particular kind of brain activity. I have specified the kind of brain activity that is necessary and sufficient to constitute feeling (activation of retinoid space), and I have presented empirical evidence in support of the retinoid model of consciousness. Stevan apparently believes that his intuition trumps empirical evidence.

Anonymous (2012-08-12, 17:55):

With all due respect, you've moved too quickly in saying I accept the proposal on a conceptual basis. However, over the past one to two months, I have formed at least a preliminary idea of what I believe the answer to the hard problem will look like, and I think that proposals such as those summarized by you above will be integral components, individually necessary but insufficient. Dr. Trehub's proposal appears consistent with some part of what I currently believe to be a plausible explanation of consciousness, and warrants further discussion and critique. However, I do not yet accept Dr. Trehub's proposal, since I have not yet had time to appropriately evaluate and critique it.

I don't believe the hard problem of feeling will be "solved" by a concise reductionist solution (a singular physical, yet undiscovered NCC), nor will it be answered at all if we don't reify the concept of *feeling* from something that is unavailingly subjective and distinct from *doing* (and thus inaccessible to scientific observation and experimentation). For these purposes, we *should* turn to hermeneutic proposals which explain feeling not in the subjective sense, but explain feeling by the mechanisms which contribute to its production — explain feeling in terms of doing since doing causes (causes - not correlates with) feeling.
I'll expound on this in our discussion thread on your talk, as I think these two issues at this point coalesce…

Instructor (2012-08-12, 13:26):

THE HERMENEUTIC HALL OF MIRRORS

@Roberto, what I am rejecting on a "merely" conceptual basis, you are *accepting* on a merely conceptual basis!

And in doing so I am afraid you are missing the point, which is certainly a conceptual one:

"Perspectival representations" are fine; you can have them in a robot (even "volumetric" ones, *pace* @Arnold!). But why should a representation be *felt* just because it's perspectival or volumetric?

I know it's discouraging to have every hopeful starter rejected, but the problem is not called "hard" for nothing. You can bet that if there *is* a solution, it's not going to be an easy (and question-begging) one like "perspectival representations."

What makes simple solutions look like they work is almost always *hermeneutics* -- which means *interpreting something in terms of something else*:

We know what it feels like to feel. Feeling is perspectival. Feelers have a "point of view."

(Add a few redundant mentalistic adjectives: a *subjective*, "*1st-person*" point of view. For good measure, call it a "conscious experience," of which you are "aware." Add that it is "aspectual" and has a "phenomenal character": an incantation of "qualia" sometimes helps too.)

And when you've done all that, add that you have a perspectival model -- and presto, you have a solution to the hard problem.

The naive little niggler that always gets forgotten, though, is:

"Yes, I can *interpret* your model's properties *as if* they were felt properties, but you forgot to tell me how and why they are felt: because otherwise they are simply encoded properties and enacted properties, as in a robot (in other words, *doings*). They may have all the objective features you attribute to them -- doings, all -- but how and why are they *felt*?"

Mentalistic hermeneutics creates a hall of mirrors in which you read off exactly what you have projected into it, forgetting that the source is you.

(By the way, Arnold Trehub's "perspectival/volumetric" hermeneutics are very similar to Bjorn Merker's "ego-centre" hermeneutics. And with his "global workspace," Bernie Baars is -- to pinch a quip from Bradley (http://www.holybooks.com/wp-content/uploads/Appearance-and-Reality-by-FH-Bradley.pdf) on metaphysics -- "a brother hermeneutician with a rival interpretation." Ditto for Shimon Edelman's "temporal integration" hermeneutics and Antonio Damasio's homeostatic hermeneutics, and for that matter, it would be true of Dan Dennett's "heterophenomenology" too, right down to the last JND -- if it weren't for the fact that Dan is actually denying the existence of feelings altogether, as not being anything but the doings of heterophenomenology...)

Anonymous (2012-08-12, 11:53):

I must say that I identify with Dr. Trehub's sentiments here. I admit that I have not critically evaluated all of the empirical evidence Dr. Trehub has cited in support of his theory of retinoid space, but the conceptual framework he has described does attempt to address how feeling is generated and represented in the brain. In any case, the empirical support does not seem to be addressed in the above exchange, and Dr. Harnad dismisses the model on a merely conceptual basis (*"Why do internal representations have to be felt, rather than just representing?"*).

To address Dr.
Trehub's points at their fundamental level: he believes we form perspectival and subjectively unique representations of the world and ourselves in a network of neurons that he calls "retinoid space"; this is the basis of the subjectivity of feeling. (Whether this is a single locus or a distributed network of neurons is conceptually irrelevant, though it would need to be addressed in a complete explanation of how we feel.)

This type of subjectively contextual neuronal network can operate independently or in concert with other brain systems (e.g., the motor control of respiratory muscles can occur independent of the retinoid space during autonomous breathing, or bound to the activity of neurons which compose the retinoid space). The binding of the retinoid space (or whatever you'd like to call the neuronal basis of subjective feeling) with myriad brain functions (other doings) could occur through mechanisms described by the global workspace theory of Dr. Baars and/or the supramodular interaction theory of Dr. Morsella, etc.

Before dismissing this by asking why we need subjective feeling in such a system as described above, rather than just autonomous doing, I'd appreciate your opinion on this: could such a system exist without producing feeling out of these doings?

Anonymous (2012-08-10, 18:10):

I feel like Dr. Baars staged a concept at the beginning of his talk that wasn't explicitly integrated with the rest of his presentation. Specifically, he discussed evolutionary pressures regarding the limited capacity and compensatory value of consciousness. While the subsequent content (the biology of global workspace theory) is clearly related to his introduction, I don't think it was ever contextualized evolutionarily, nor was the compensatory value made explicit. It seems as though this first section was more related to the earlier title of his talk.

There are a few benefits of a "global workspace"-style consciousness that occur to me. For one, it provides a powerful ability to integrate disparate sources of information. It seems this would facilitate rapid and flexible learning and the generation of myriad behavioral contingencies. The modularity also affords robust functioning in the face of brain injury (though one could argue that this is a general property of the brain, not the global workspace specifically).

Are my conclusions valid? Would Dr. Baars or other interested parties care to address the issues I feel were unaddressed?

I should mention that I am nitpicking, and I found the talk quite fascinating.

Arnold Trehub (2012-08-03, 07:56):

Stevan: "What you are describing is a system that can behave adaptively in space. It needs sensors to detect, information processing, some dynamics, and effectors to respond. Even today's simple robots do some of that."

You have missed the most relevant aspect of the retinoid model. Retinoid space is a volumetric analog of the space that exists around you, and it includes a fixed locus of perspectival spatio-topic origin -- the self-locus (I!) in the retinoid theory of consciousness/feeling. Also, retinoid space does NOT need sensors to give you a sense of being *here* in a surround. Sensory projections add phenomenal content/feelings to the subjective primitive of feeling as the origin of all experience.
I should add that there is no known artifact, robot or computer, that contains an analog representation of the volumetric space in which it exists, containing a representation of any of its parts.

Before we close our exchanges, Stevan, would you tell us what kind of scientific theory and evidence would, in your opinion, count as an explanation of how and why we feel?

Instructor (2012-08-02, 16:51):

Arnold, this is unfortunately getting too repetitious. Neither of us is providing any new information. What you are describing is a system that can behave adaptively in space. It needs sensors to detect, information processing, some dynamics, and effectors to respond. Even today's simple robots do some of that.

You haven't given a hint of a hint about why any of that should be felt, whether it happens inside a robot or inside an organism with a brain.

Now I will let you have the last word, but I will no longer respond unless something new and substantive comes up.

Arnold Trehub (2012-08-02, 07:50):

Stevan: "One cannot just posit that a neural system feels, and then ask for counter-arguments. One has to explain how and why it feels, rather than just does whatever it does, unfeelingly."

I did explain how and why one feels in my post above of 23 July 2012. Apparently you are still puzzled. Let me approach the problem from a different angle. If we are to explain how and why we feel, we must offer an overt description of what it is like to have any kind of feeling.

WHAT IS IT LIKE TO BE? IT IS TO FEEL LIKE YOU EXIST IN A SURROUNDING SPACE.

This is your primitive phenomenal world. But with no sensory transducers to detect the space around you, how can you feel that you are in a space? This is the astonishing aspect of feeling that is the key to understanding how and why we feel! The neuronal structure and dynamics of retinoid space provide an innate brain representation of the space we live in, and it is organized around a fixed locus of spatio-temporal perspectival origin which is our self-locus -- our core self (I!). This special kind of neuronal brain mechanism CONSTITUTES *subjectivity*, which is the fundament of all feeling/consciousness. So autaptic-cell activity in retinoid space CAUSES FEELING, and NOTHING ADDITIONAL HAS TO BE ACCOUNTED FOR AS A GENERATOR OF FEELING. The WHY question is also answered in my 23 July post. You should note that all of this is not mere speculation, as there is a very large body of empirical findings that support the retinoid theory of subjectivity/consciousness/feeling. The bottom line is that the retinoid system CANNOT DO WHAT IT DOES UNFEELINGLY!

Stevan, if you still believe that some additional kind of brain activity is needed to account for subjectivity/feeling, then please tell us what you think it is and how its properties might be empirically tested.

Instructor (2012-08-01, 16:32):

Arnold: Why and how is "any representation that is located in retinoid space" felt? I assume this is the activity of some neural system, actual or theoretical. But it is not enough just to say it is so: Why is retinoid activity felt? How does retinoid activity generate feeling?

One cannot just posit that a neural system feels, and then ask for counter-arguments. One has to explain how and why it feels, rather than just does whatever it does, unfeelingly.

Xavier Déry (2012-07-31, 11:23):
Xavier Dery @XavierDery

Baars talk: the theatre analogy for relating consciousness to general brain works seems to me very elegant and really useful! #TuringC

12:39 PM - 4 Jul 12 via Twitter for Android

Marjorie Morin (2012-07-25, 12:17):

Originally posted on Facebook:

ERIC MUSZYNSKI:
Sossin: "[Aplysia have] no need for firing in the absence of inputs" -- could this be a clue to our self-awareness? Our neurons are supercharged and MUST discharge even in the absence of stimulus or motor control, so it accidentally turned into self-awareness... It would then be just a spandrel of our powerful brain? No function needed to explain it.

MARJORIE MORIN:
It's a very interesting point. He said that consciousness may require a certain minimal brain size. If we follow your logic, it might not be the size of the brain so much as how much it is "occupied" by what it has to process that could bring consciousness?

Arnold Trehub (2012-07-24, 07:40):

Stevan: "Why do internal representations have to be felt, rather than just representing?"

There is only one kind of internal representation that is FELT; it is any representation that is located in retinoid space, which is the phenomenal world in our extended present. All other internal representations are NOT felt. Each of our different sensory modalities may contain representations in its synaptic matrices, but these distinct representations remain PRE-CONSCIOUS/UNFELT until they are projected into retinoid space, via recurrent axonal excitation, and bound in proper spatio-temporal register, where they become SOMETHING SOMEWHERE in our phenomenal world in perspectival relation to our core self (I!).
This is the SUBJECTIVE DOING of the retinoid system that CONSTITUTES feeling/phenomenal experience.

What are the counter-arguments?

As an aside, I want to express my thanks to you, Stevan, for providing this sky-writing platform where we can actively engage in detailed discussion about consciousness, the most significant and vexing problem in science. This is the kind of back-and-forth that is needed.

Instructor (2012-07-23, 17:49):

Why do internal representations have to be felt, rather than just representing?

Arnold Trehub (2012-07-23, 08:17):

THE RETINOID THEORY IS A CAUSAL EXPLANATION OF HOW AND WHY WE FEEL (have conscious experience)

Stevan: "A scientific theory gives a testable causal explanation of the evidence."

Consider the following:

1. I am conscious if and only if I have a *sense of being here with something all around me even though the particulars are constantly changing*. Call this the minimal conscious content (MCC).

2. MCC must be the product of an active brain. Given this stipulation, I have proposed, as a working definition, that consciousness (MCC) is a transparent brain representation of the world (the space that is all around me) from a privileged egocentric perspective (me here).

HOW:

1. What system of mechanisms in the brain has the competence to cause MCC? I have proposed that the human brain has a system of neuronal brain mechanisms with the structure and dynamics that can represent a global volumetric spatiotopic analog of the world space we live in, including a fixed locus of perspectival origin that I call the *core self* (I!). This part of the retinoid system is called RETINOID SPACE. I have specified the minimal structure and dynamics of the brain system that regulates the content of retinoid space and call it the RETINOID SYSTEM.

2. Why should we think that the retinoid system is a competent causal model of MCC? It seems clear that any competent model should be able to make relevant predictions that can be tested and empirically validated. One thing we should NOT expect is that a competent causal model must be able to exhibit ALL the properties of MCC. (I think this unwarranted expectation plays a part in the "explanatory gap" notion in consciousness studies.) What we should expect is that the candidate model of MCC be able to generate matching *analogs* of relevant properties of the phenomena.

3. In a wide range of empirical tests, the operating characteristics of the retinoid model successfully predicted/explained previously inexplicable conscious phenomena/feelings, and also successfully predicted novel conscious phenomena. Among many examples are hemispatial neglect, seeing-more-than-is-there (SMTT), Julesz random-dot stereograms, the pendulum illusion, 3D experience from 2D perspective drawings, the moon illusion, the Pulfrich effect, etc.

WHY:

Organisms without consciousness/feeling do not have an internal global representation of the objects and events in the world they live in, and can only respond to the immediate exigencies of their environment with reflexive adaptation. Conscious creatures, on the other hand, do have internal representations of the world they live in and gain an evolutionary advantage by being cognizant of the objects and events in the world with affordances for their survival and flourishing. In humans, consciousness also enables the imaginative and practical reconstruction of the world we live in.

Instructor (2012-07-22, 14:40):
CORRELATION VS CAUSATION

A scientific theory gives a testable causal explanation of the evidence.

A Just-So Story just gives a causal interpretation (hermeneutics), viz:

"Why do plants grow toward the sun?" Because of a phototropic force.

"Why do organisms feel?" Because of activity in this neural system...

Arnold Trehub (2012-07-22, 11:41):

*Some stories help us explain/understand consciousness/feeling. Other stories obfuscate our understanding of consciousness/feeling.*

Stevan, what is the difference between a "Just-So story" and a scientific theory?

Instructor (2012-07-22, 09:21):

HARD PROBLEMS NEED SUBSTANTIVE SOLUTIONS (Reply to multiple commentaries)

The "hard" problem is not a metaphysical one, and declaring oneself to be a card-carrying monist does not solve it. Nor does "operationalizing" the measurement of feeling. (That's the other-minds problem, and the Turing Test -- T2, T3 or T4 -- is the best we'll ever get.)

Nor does one's monist card do away with the doing/feeling dichotomy: Yes, the brain must cause feeling, somehow, for some adaptive or functional reason. And causing is doing. But the trouble is that *we don't know how, and we don't know for what adaptive or functional reason*. And explaining *that* is the hard problem. It cannot be hand-waved away by blurring distinctions or invoking monism or telling Just-So stories...

Anonymous (2012-07-21, 15:12):

Dr. Baars makes a powerful empirical case for GWT. But I just want to raise one tiny philosophical issue: there is a huge debate among materialist philosophers as to whether we should try to give a reductive explanation of phenomenal consciousness at the intentional/cognitive level or at the neurobiological level. (In a nutshell: people who choose the first option believe that all that is required for a creature to have conscious states is some kind of information-processing system that functions in some specific way, regardless of how it is implemented physically. People who choose the second option will say that for a creature to have conscious states, certain physical events that can be described at the neurobiological level (e.g., neurons oscillating at some specific frequency) must occur in its brain.) His talk did not address this issue directly, and he did not mention the notion of an NCC. So it is not clear to me which of those two views he favors. His 1988 book includes the term 'cognitive' in its title, which seems to suggest the first option. But I have also noticed that in their 2011 paper Lau and Rosenthal keep referring to GWT as 'neuronal GWT'. So which option does he favor?

Arnold Trehub (2012-07-17, 08:45):

Hi Bernie,

You wrote: "I argue that observing ego functions are coextensive with contextual frames for qualitative experiences ..."

This has puzzled me for a long time. I think of the *ego* as the core self (I!), a cluster of neurons that constitutes the perspectival locus of spatio-temporal origin within our phenomenal space (retinoid space in my theoretical model of consciousness). It seems to me that the long-held notion of the ego/self as an *observer* has been a serious stumbling block in our understanding of consciousness. In detailing the plausible neuronal mechanisms that might generate our conscious content, I found that the ego/core self could not have the biological machinery needed to be an observer and, at the same time, function as the fixed perspectival origin of all conscious experience. The only way that my theoretical model of the cognitive brain could work effectively was to have observing mechanisms in the synaptic matrices among all the pre-conscious sensory modalities. So *observation* could not be an ego function. The role of the retinoid system is to *bind* the various pre-conscious sensory observations/features (as patterns of recurrent axonal projection) in proper spatio-temporal register within retinoid space, our phenomenal world. In this process, selective excursions of excitation over retinoid space that are induced by the core self/ego (heuristic self-loci) play a critical role. But this does not entail an observing ego. For more about this, see:

http://people.umass.edu/trehub/thecognitivebrain/chapter7.pdf

and

http://theassc.org/documents/where_am_i_redux

This way of thinking about the self/ego in relation to observation has enabled the explanation of many previously inexplicable conscious phenomena and the successful prediction of new experimental findings. I would be greatly interested in your thoughts about this formulation of *observation* in the retinoid theory of consciousness, Bernie.

Arnold Trehub (2012-07-16, 11:38):

Glad to oblige. Here are three recent publications that you might look at:

http://people.umass.edu/trehub/YCCOG828%20copy.pdf

http://theassc.org/documents/where_am_i_redux

http://evans-experientialism.freewebspace.com/trehub01.htm