AI, Cognitive Science and Understanding: Comments on RS-95 (Gerard O’Brien and The Computational Theory of Mind)

As always, I thoroughly enjoyed listening to the latest episode of the Rationally Speaking podcast. This one was “On the Computational Theory of Mind”, with guest Gerard O’Brien, a philosopher of mind at the University of Adelaide. (Hosts: Massimo Pigliucci and Julia Galef.) Here are a few comments about this episode.

The goals of AI are manifold

The term “AI” is an unfortunate label, one that has misled many people, including, apparently, Gerard O’Brien. He said “Goals of AI and cognitive science are quite different. The goal of artificial intelligence is, as it says, to construct artificially intelligent systems. And it doesn’t matter how you do it.” (48:10, emphasis mine.) O’Brien contrasts AI with cognitive science, which, he says, is about understanding ourselves.

Now, if O’Brien were to reflect on the AAAI conferences he has attended in the past, he would, I am sure, agree with me that one cannot speak of “the goal of AI”. AI has manifold objectives. There are several concurrent AI research programmes. The founders of AI, and some of us still involved with it, have broader views.

As Prof. Aaron Sloman points out in an obituary for John McCarthy, who coined the term “artificial intelligence” (Sloman, 2012):

[John McCarthy] made one huge mistake, whose consequences will go on being harmful for a long time, namely naming the new field “Artificial Intelligence”, rather than, for example, “Computational Intelligence”, or the more cumbersome “Natural and Artificial Intelligence”. The mistake is puzzling insofar as it is clear that from the start his interests went far beyond just trying to make useful machines. He was trying to understand human intelligence as one example of a space of possible forms of intelligence, and he hoped that eventually we’ll be able to produce better forms than human intelligence.

The broader view to which Sloman and I subscribe is described in his programme chair’s introduction to AISB–93, “Prospects for AI as the General Science of Intelligence” (Sloman, 1993).

It is unproductive (and misleading) to claim that cognitive science is about the human mind whereas AI is about artificial minds. We need a science of the space of all possible minds, in which human minds, other existing biological minds, possible biological minds on other planets, and all kinds of artificial minds can be studied together. Broad AI is this endeavour.

My beef is not with the “true” meaning of these labels. It’s true that many AI researchers focus narrowly; some of them even misunderstand the big AI project. My proposal is to use more productive concepts. If you try to understand the actual (such as human intelligence) without sketching out the possible (the space of possible minds), you will have a very hard time of it. I did not invent the idea: it’s an old, key assumption in general AI. Einstein, I’m sure, held the same assumption about physics. One can’t do cognitive psychology properly without this broad approach.

When a cognitive psychologist posits a model of a particular mental phenomenon, she must venture into AI. In all likelihood, what she will develop is not an accurate model of the human capability. Her model may be refuted empirically or on engineering grounds. The false model may still contribute to more general AI: an exploration of possible minds.

The recent Atlantic article on Douglas Hofstadter (Somers, 2013) also presents a broad view of artificial intelligence. Hofstadter complains that “his” type of AI is a lonely business. I agree. But he is not the only one involved in the general programme of AI.

Proof of the pursuit of more general AI projects is in Margaret Boden’s excellent two-volume history of cognitive science, Mind as Machine: A History of Cognitive Science (2006, 1,631 pp.). (Boden was my external Ph.D. thesis examiner in 1994.) The work should be on every cognitive scientist’s bookshelf.

Understanding understanding does not hinge on understanding “consciousness”

Pigliucci asks, around 47:32 in the podcast, whether Watson (the computer) is intelligent in the sense that it understands something, or whether it is just fast. That’s a good question, provided that one has a productive concept of understanding. Alas, understanding itself is a cluster concept that many people, including some in cognitive science, have a hard time getting their heads around. Pigliucci and O’Brien, unfortunately, think that the concept of consciousness is critical to the concept of understanding. Pigliucci said “My definition of understanding involves consciousness.” That assumption is not productive.

For one thing, consciousness is much more difficult to characterize than any other cognitive concept. There’s more controversy about “it” than about any other topic in cognitive science. In fact, I would say it’s a non-concept, a red herring. It’s at best a general pointer, not something to be explained. As for the concept of understanding, Carl Bereiter has done the best job of analyzing it that I know of; cf. his Education and Mind in the Knowledge Age. There he makes it clear that understanding is a relational concept: it is not directly about the content or state of one’s mind.

Unfortunately, it’s hard to sum up the relational concept of understanding. And Bereiter’s book is long and tough (but rewarding) to read. Sections 2.2 and 6.3 of my book Cognitive Productivity briefly present this concept in a new light.

Here’s a great paper on “consciousness” that shows how easy it is to misunderstand “it” (Sloman, 2010):
http://www.cs.bham.ac.uk/research/projects/cogaff/phenomenal-access-consciousness.pdf

(Sloman later qualified it.)

What Maggie Boden (1996) had to say about consciousness, in relation to a theory of grief proposed by Wright, Sloman, and myself, is germane to our problem:

theoretical psychologists in general sideline consciousness: either they ignore it entirely, or they take it for granted philosophically and ask about the conditions under which it appears or disappears. They are well advised to do so, since we do not yet understand this concept (more accurately: this mixed bag of related concepts) at the philosophical level. Even a journal whose title has “Philosophy” as its leading word need not insist that every time consciousness is mentioned it must be philosophically discussed.

That’s as true today as it was in 1996.

Pigliucci’s question above, about whether Watson “is” intelligent, is just a starting point. But let’s beware of such binary questions. The assumption that something either is or is not intelligent is misleading. By this I do not mean that intelligence is a continuum. It’s not! (Its measurement by IQ tests might be, but that merely reflects an operationalization of human intelligence.) Rather, there is a space of possible intelligences, with many discontinuities. That is, the space is both continuous and discrete. Watson may very well have some features that are human-like, and some that aren’t. A general AI project is to map out this space of possibilities.
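
To make the “continuous and discrete” point concrete, here is a toy sketch in Python. It is purely illustrative: the mechanism names and parameter values are hypothetical placeholders, not a real taxonomy of minds. The point is only that designs can differ qualitatively (which mechanisms are present) as well as quantitatively (parameter settings), so no single scalar, IQ-like or otherwise, orders them.

```python
from dataclasses import dataclass, field

# Toy model: a point in a "space of possible minds" combines discrete
# dimensions (mechanisms that are present or absent) with continuous
# dimensions (parameters). All names below are hypothetical illustrations.
@dataclass
class MindDesign:
    mechanisms: frozenset                            # discrete dimensions
    parameters: dict = field(default_factory=dict)   # continuous dimensions

    def shared_mechanisms(self, other):
        """Mechanisms common to both designs (qualitative overlap)."""
        return self.mechanisms & other.mechanisms

human = MindDesign(
    mechanisms=frozenset({"natural_language", "episodic_memory", "motive_generation"}),
    parameters={"retrieval_speed": 0.3},
)
watson = MindDesign(
    mechanisms=frozenset({"natural_language", "statistical_retrieval"}),
    parameters={"retrieval_speed": 0.99},
)

# Watson-like and human-like designs overlap on some discrete dimensions
# and differ on others; neither is simply "more intelligent".
print(human.shared_mechanisms(watson))        # {'natural_language'}
print(human.mechanisms ^ watson.mechanisms)   # the qualitative differences
```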

The notion of a space of possible minds is not only useful for distinguishing humans from other species and from artificial possibilities. Between humans there are structural differences (some minds have mechanisms that others don’t). Indeed, within the span of their lives, individual humans change in qualitative ways. That is essentially what I argue in Cognitive Productivity: much “learning” is like ontogenetic mental development; it’s about structurally changing our minds. For example, one needs to develop new “goal generators” as one learns potent concepts, so that one will be motivated to apply them when they are applicable. That’s a qualitative change, as the sketch below illustrates.
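
Here is a minimal sketch of the goal-generator idea in Python. The class names and triggering conditions are hypothetical illustrations of the concept, not the architecture described in Cognitive Productivity: learning a potent concept is modelled as installing a new generator, a structural change that makes goals possible which the agent could not previously generate.

```python
# Minimal sketch: a goal generator proposes a goal whenever its triggering
# condition holds in the current situation. Names are illustrative only.
class GoalGenerator:
    def __init__(self, name, trigger, make_goal):
        self.name = name
        self.trigger = trigger      # predicate over the current situation
        self.make_goal = make_goal  # builds a goal from the situation

    def propose(self, situation):
        return self.make_goal(situation) if self.trigger(situation) else None

class Agent:
    def __init__(self):
        self.generators = []

    def learn_concept(self, generator):
        # Structural change: the agent now has a mechanism it lacked before.
        self.generators.append(generator)

    def generate_goals(self, situation):
        proposals = [g.propose(situation) for g in self.generators]
        return [p for p in proposals if p is not None]

agent = Agent()
situation = {"reading": True, "claim_contradicts_belief": True}
# Before learning, this situation activates no goal at all.
assert agent.generate_goals(situation) == []

# Learning the concept of, say, testing claims installs a new generator.
agent.learn_concept(GoalGenerator(
    name="test-claims",
    trigger=lambda s: s.get("claim_contradicts_belief", False),
    make_goal=lambda s: "formulate and test the conflicting claim",
))
print(agent.generate_goals(situation))  # ['formulate and test the conflicting claim']
```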

Analog vs. digital computing, classical vs. other AI

O’Brien repeatedly claims that modelling mental phenomena with analog representations is superior to using digital ones. As is clear in Cognitive Productivity, by 1990 I was already tired of the straw-man arguments against “classical” AI. That was over twenty years ago. There are valid reasons for abstracting from neural and analog representations, just as there are valid reasons for modelling phenomena with analog representations. I don’t think the public gains much from cognitive scientists airing such old dirty laundry on the subject. So I won’t even bother to delve into the hoary distinctions.

So, let’s move on. It’s better to focus on particular models of mind. I describe a high-level architecture in Part 2 of Cognitive Productivity. I hope it will help readers better understand themselves.

Nevertheless

While I disagreed with many of the assumptions in this episode, it’s a good one to learn from. Julia Galef and Massimo Pigliucci are always a treat to listen to.

References

Beaudoin, L. P. (2013). Cognitive Productivity: The Art and Science of Using Knowledge to Become Profoundly Effective. Port Moody, BC: CogZest.

Boden, M. A. (2006). Mind as machine: A history of cognitive science (2 volumes). Oxford: Oxford University Press.

Boden, M. A. (1996). Commentary on “Towards a Design-Based Analysis of Emotional Episodes.” Philosophy, Psychiatry & Psychology, 3(2), 135–136.

Galef, J., & Pigliucci, M. (2013, October 27). RS95 – Gerard O’Brien On the Computational Theory of Mind. New York City Skeptics. Retrieved from http://www.rationallyspeakingpodcast.org/show/rs95-gerard-obrien-on-the-computational-theory-of-mind.html

Sloman, A. (1993). Prospects for AI as the general science of intelligence. In A. Sloman, D. Hogg, G. Humphreys, D. Partridge, & A. Ramsay (Eds.), Prospects for Artificial Intelligence: Proceedings of AISB–93 (pp. 1–10). Amsterdam: IOS Press. Retrieved from http://www.cs.bham.ac.uk/research/projects/cogaff/Aaron.Sloman_prospects.pdf

Sloman, A. (2010). Phenomenal and access consciousness and the “hard” problem: A view from the designer stance. International Journal of Machine Consciousness, 2(1), 117. doi:10.1142/S1793843010000424

Sloman, A. (2012). John McCarthy – Some Reminiscences (Extended version of Memoir in AISB Quarterly, No 132, Sept 2011 pp 7–10). Retrieved from http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-jmc-aisb.pdf

Somers, J. (2013, October 23). The man who would teach machines to think. The Atlantic. Retrieved November 5, 2013 from http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
