The January 25, 2015 meeting on this topic has been postponed. Watch this space, or send me an email in February, to get the new date.
On [Date TBD], I will moderate a humanist discussion on so-called “consciousness”. Given that the content will be of broad interest, I will post a few articles here for the participants and other interested readers.
For many years, for several reasons, I argued against using the term “consciousness”. “Conscious”, an adjective, is a helpful term that can trigger meaningful psychological inquiry; but the noun “consciousness” all too often interferes with the pursuit of understanding the human mind. It tends to induce a reification fallacy, i.e., the assumption that because we have a noun (here, “consciousness”) it must refer to some particular thing. Contrast “dog”, which has referents, with “energy”, which does not. Energy, like gravity, is a helpful problem-centered concept, i.e., a concept that is used to frame and solve problems. “Dog” is helpful, but it is not problem-centered: it was not developed to solve a theoretical challenge of understanding the world. Treating consciousness as stuff-like causes problems.
Here’s a little fact that might help you approach questions involving the term “consciousness” cautiously: many languages don’t even have this term; they don’t make the same distinctions as English. Has the English-speaking world necessarily happened upon something true that other cultures have ignored? Not likely. English makes it very easy to turn adjectives into nouns, and nouns into verbs; it doesn’t always work out well, and philosophers sometimes get stuck by this. English has several related terms: awareness, consciousness and attention. French does not make these distinctions. The primary meaning of “conscience” in French is the same as that of the English term “conscience”. French philosophers, not everyday folk, tacked a different, confusing set of meanings onto this previously helpful term, i.e., “consciousness” (in the English senses of the term). Obscurity there, as in English, was the result. French also lacks an equivalent of “awareness”. The terms “attention” and “awareness” work quite well in psychology. The concepts of consciousness trip so many people up that some editors, post-behaviourism, still don’t want to see the term in their journals. What value does it add? (Conceptual analysis challenge: compare and contrast attention, awareness and consciousness.)
One should also be very leery of “What is … ?” questions: “What is free will?” “What is consciousness?” “What is the mind?” People who spend a lot of time discussing these questions tend to think that there’s an empirically correct answer. To deal with questions of meaning, one needs to engage in conceptual analysis; to try to answer such questions without a sufficient grasp of conceptual analysis is normally an exercise in futility. I’m not suggesting that one necessarily needs to take a course in conceptual analysis or read the texts on it. I don’t think Einstein or Newton did so before they analyzed and formalized the concepts of energy and force, respectively. Nor do I suggest that conceptual analysis is the end point. However, as in so many areas of knowledge, training in the cognitive tools of the trade does help. So, I recommend reading and doing the exercises in Aaron Sloman’s free chapter on conceptual analysis before trying to analyze a difficult cluster of concepts and non-concepts such as “consciousness”.
Also, many people who talk about “consciousness” refer to qualia as “the hard problem of consciousness.” That, in my opinion, is overstated. Qualia are not the hard problem of consciousness. The real hard problems of consciousness are to get a machine to make the discriminations, and to have the capabilities, that these philosophers (many but not all of whom have never done AI programming) consider not to be hard. For some examples of hard problems in visual perception, for instance, check out this paper. The issues of qualia are worth some reflection. However, I think they receive far more attention than they deserve in discussions of mental phenomena. We will not be able to address qualia properly until we have better models of the so-called “easy” problems of “consciousness”. Dennett has done a good job of refuting the qualia folk. Perhaps the qualia folk will never stop until our progeny creates “AI” that is as convinced of its qualia as the qualia folk are of theirs.
Look at the history. Many philosophers (following Aristotle and his theory of the immaterial intellect) used to think that you couldn’t make a machine that thinks. But AI proved them wrong. William McDougall and many others thought that you couldn’t make a machine with goals and purposes. Then AI (and Maggie Boden, in Purposive Explanations in Psychology) proved them wrong. People used to think it was impossible for robots to have emotions. Then in 1981 Aaron Sloman published “Why robots will have emotions”, and a swath of cognitive scientists began to develop computational theories of emotion. We don’t have complete theories of emotion yet, but we have enough of an understanding to see the possibility of robots with emotions.
Daniel Dennett wrote an important paper called “Why you can’t make a computer that feels pain” that is relevant to our concerns. (I highly recommend the paper.) The reason you can’t make a computer that feels pain is that the term “pain” means multiple things, some of which are inconsistent with the others. Here again the value of conceptual analysis shines through. One of the reasons that it is hard to understand “consciousness” is that there are incoherencies in the “concept”. I use scare quotes because there isn’t a single, well-defined concept at play here. (This applies to many basic terms in cognitive science, including emotion.) That’s why I try to speak in terms of the concepts (plural), not the concept (singular), of consciousness.
One of the things I will do as moderator is to analyze the fallacies in a typical argument about the importance of qualia. Then I will try to steer the discussion toward more productive questions.
You’ll notice that I have come round over the years: I do now use the term “consciousness”, even on this blog. What do I mean by that term? I typically have in mind something akin to Baars’ notion of consciousness and, lately, the theory developed by Merlin Donald in A Mind So Rare. The value of their work is not that it answers the question, “What is consciousness?” It is that it explores capabilities of the human mind, proposes designs that address these requirements, and leads to computer simulations of phenomena. Their work attempts to unify large swaths of results from cognitive science and points the way towards future research. It’s progressive in the sense specified by Imre Lakatos (though we will only be able to assess this properly in the future!).
In future posts in this thread, I will provide a list of questions, readings, podcasts and videos. I aim to help us frame questions about consciousness productively, i.e., in terms of problems and information processing theories. That means being able to assess questions (the good, the bad, the ugly) — a very important skill. This will involve reframing questions. Often the term “consciousness” will be replaced. One of the main goals is to gain a better understanding of the capabilities of the human mind, its major constituents, and their interrelations. To that end I will outline some tools in the cognitive scientist’s toolkit. If you have read Cognitive Productivity, then you’ll already be equipped with many of these tools.
What questions do you think are productive questions to ask about, or in terms of, “consciousness”? Finding it tough? It is, in fact, a very tough question to answer without having a promising theory of the mind to begin with! And those are hard to come by.