This document contains preparatory materials for a humanist meeting on “Reverse engineering the human mind as a means to better understand ourselves and each other”. It will be held in late March 2021 (members and invited guests only).
Participants may wish to first read the preamble, which I (Luc P. Beaudoin) published on 2021-03-23.
Schema activation exercises
Here are some “schema activation” exercises for participants to prepare for the meeting.
1. Analyzing preoccupations
- Consider a current or recent experience of grief, fear, anger, resentment — preferably one you had difficulty ‘shaking’ (or that you would have difficulty shaking if you tried).
- Go for a walk to analyze the experience.
- Write down (journal) some of your intense preoccupations. Draw a diagram if it helps (perhaps: try connecting nodes in the diagram).
- Now, ask yourself in evolutionary terms why it might be difficult for you to coldly put the preoccupying concern out of your mind. What evolutionary purposes might lack of control over consciousness have?
- Reflect on your experiences meditating.
- What’s the right balance between complete mental control and obsession? ADHD? schizophrenia?
Time permitting, I will define mental perturbance and argue that it is an unavoidable possibility for autonomous agents who have sufficiently complex deliberative and “meta-cognitive” competence. To do this, I will also need to briefly explain what I mean by an autonomous agent.
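To make the idea of perturbance a little more concrete, here is a toy sketch of my own devising (it is not the architecture from my thesis or papers, and all names and numbers in it are illustrative assumptions): motives compete for attention through an interrupt filter, and a sufficiently insistent motive keeps breaking through even when the agent raises the filter threshold to put it out of mind.

```python
# Toy sketch (an illustration only, not the actual perturbance model):
# motives compete for "consciousness" through an interrupt filter.
# A sufficiently insistent motive keeps surfacing even after the agent
# tries to set it aside: a minimal analogue of perturbance.

def simulate(motives, suppression=0.0, ticks=5):
    """Return how often each motive captured attention.

    `motives` maps a motive name to its insistence (0..1); trying to put
    motives out of mind raises the filter threshold by `suppression`.
    """
    captured = {name: 0 for name in motives}
    threshold = 0.5  # baseline interrupt-filter threshold (assumed value)
    for _ in range(ticks):
        for name, insistence in motives.items():
            if insistence > threshold + suppression:
                captured[name] += 1  # this motive breaks through the filter
    return captured

motives = {"grief": 0.9, "errand": 0.6}
print(simulate(motives))                   # both motives surface
print(simulate(motives, suppression=0.2))  # only the insistent one persists
```

The point of the sketch is architectural: suppression works on mundane motives but not on highly insistent ones, so loss of control over consciousness falls out of the design rather than being a bug added to it.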
2. Analyzing insomnolence
If you’ve recently experienced difficulty falling asleep, reflect back on the thought patterns you engaged in before falling asleep. It’s best to do this immediately the next morning, because we quickly forget. (In fact, as I explained in Cognitive Productivity: Using Knowledge to Become Profoundly Effective, even within seconds we forget the little that we experience of our own information processing.)
Per the above, journaling, drawing and walking might help you better understand this aspect of your experience.
3. On limited attention
One of the remarkable characteristics of human minds is the apparently limited amount of information that can be held in short-term awareness, to use Merlin Donald’s term (from A Mind So Rare, a book I presented at a humanist meeting in 2014). A related concept is working memory. It’s not just the “content capacity” that is limited, but also the number of conscious threads of activity that can run in parallel.
It seems to be mostly taken for granted, in folk psychology and in cognitive science (including neuroscience), that such limitations (as difficult as they are to characterize) are contingent: a fluke of evolution or, at best, a result of evolution’s tendency to be parsimonious. Yet looking at the cortex, with its enormous number of connections (where each connection is itself a ‘computer’), I am struck not by a sense of parsimony but by one of abundance. So: might there be good design reasons for this limitation?
So, a question you can ask yourself is:
why should there be such limits on short-term awareness?
In chapter 4 of my thesis I argued that there are in fact very good engineering reasons for these limitations. But don’t peek at my answers until you’ve spent a few days (or months) trying to answer the question yourself.
Does that mean that the futuristic AI-themed film, Her, was not realistic in its portrayal of Samantha as processing multiple threads in parallel? Not necessarily, because “she” (in Her) was not embodied. My chapter 4 argument hinged on embodiment.
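The embodiment point can be made concrete with a toy sketch of my own (it is not the argument from chapter 4; the goal processes, gates and numbers are illustrative assumptions): an agent has only one body, so letting several goal processes drive it in parallel yields contradictory motor commands, whereas an “attention” gate that serializes access keeps behaviour coherent.

```python
# Toy illustration (my own, not the chapter-4 model): an embodied agent
# has ONE body, so many goal processes driving it in parallel produce
# conflicting motor commands; serializing access via an attention gate
# keeps the commands coherent.

def run(processes, gate, ticks=10):
    """Each process proposes a motor command per tick; `gate` selects
    which proposals actually reach the single actuator."""
    position, conflicts = 0, 0
    for tick in range(ticks):
        proposals = [p(tick) for p in processes]
        chosen = gate(proposals, tick)
        if len(set(chosen)) > 1:   # contradictory commands this tick
            conflicts += 1         # the body receives incoherent input
        else:
            position += chosen[0]
    return position, conflicts

approach = lambda t: +1            # goal A: move toward food
avoid    = lambda t: -1            # goal B: retreat from a threat

parallel   = lambda props, t: props                    # all goals drive the body
serialized = lambda props, t: [props[t % len(props)]]  # attention picks one

print(run([approach, avoid], parallel))    # every tick is a motor conflict
print(run([approach, avoid], serialized))  # zero conflicts; control alternates
```

A disembodied agent like Samantha has no single actuator to fight over, which is why parallel threads are less obviously problematic for “her”.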
In any event, this is yet another example of taking the designer stance to psychological questions.
Caveat/aside: not everyone agrees that there are limits in “attentional” information-processing (whatever that is), or what those limits are. For instance, Daniel Dennett in Consciousness Explained (sic), argues for multiple parallel drafts of processing and “heterophenomenology” [thank goodness for TextExpander since I can hardly type the word otherwise, if you will excuse this drafty thought surfacing].
4. The Goodness Paradox (book) and the psychology of norms
Our previous humanist meeting discussed the excellent book, The Goodness Paradox: The Strange Relationship Between Virtue and Violence in Human Evolution, by Richard Wrangham. The book cites Maciej Chudek and Joseph Henrich (of University of British Columbia) who define norm psychology as
a suite of cognitive mechanisms, motivations and dispositions for dealing with norms
It defines norms as

“learned behavioral standards shared and enforced by a community”

More generally, Wrangham characterizes norms as “rules that everyone is expected to obey”.
Think of recent times when you were angry at someone who violated norms. Could evolution have designed you to be able to deliberately reject a norm of your choosing such that its violation did not trouble you?
Choose a norm that really matters to you. Now imagine that come hell or high water, you have to drop the norm for 24 hours despite witnessing its violations. How would that work out? Why?
5. A Dr. Jordan B. Peterson podcast episode
Dr. Peterson has published many lectures in psychology, several of which are aligned with an integrative design-oriented approach to autonomous agency.
- they integrate multiple aspects of psychology (such as cognition, affect, motivation, socialization, personality, development);
- they repeatedly return to evolutionary purposes;
- they focus on humans as agents acting competently in the world;
- they draw from and contribute to multiple levels of analysis (evolution, neuroscience, information processing, consciousness, sociology, philosophy and the entire realm of myth);
- they frequently allude to competence and capabilities as needing to be explained. (We research psychologists need to be curious about seemingly obvious competence and behavior, just as Newton was about falling objects.) Compare John McCarthy’s 2008 paper in the journal Artificial Intelligence, “The well-designed child”, and Sloman’s in the same issue, “The well-designed young mathematician”.
- they contain subtle references to reverse engineering — e.g., “if you think rats are simple, try building one” in S4 E5, an interview of Mark Manson, best-selling author of The Subtle Art of Not Giving a F*ck and, more recently, Everything Is F*cked.
Example JBP Podcast episode: Our Emotions and the Social Hierarchy
The features described above are evident in many episodes of the Jordan B. Peterson podcast series. Try this one for example: S2 E48: The Jordan B. Peterson Podcast: Our Emotions and the Social Hierarchy.
In the episode, Dr. Peterson mentions utilization behavior, which illustrates what happens in some forms of prefrontal-cortex damage, where “self-evident” human competence breaks down, revealing that the “simple” competence was actually extremely complex. If you’ve known anyone who has had a stroke, head injury, brain cancer or genetic psychopathology, you will be familiar with this.
One need not be a researcher to enjoy marveling about the capabilities described by Dr. Peterson (and by myself on the notes page).
With the integrative design-oriented approach (which is a form of theoretical “Artificial” Intelligence), we marvel at, and try to cogently specify (as an engineer might), human capabilities that may appear self-evident. Then we create designs and simulations that try to realize those capabilities. As far as I know, Dr. Peterson has not proposed detailed designs with the aim of their being implemented in computer simulations. However, I claim that information from his work can be used as input to requirements analysis and design in IDO R&D.
Perturbance in Her (film)
When I watched the film Her, mentioned above, I got the sense that the director must have read about insistence and perturbance, including Why Robots Will Have Emotions (on Academia.edu). If I recall correctly, Samantha described herself as having difficulty keeping her mind off Theodore Twombly. Would an AI necessarily experience perturbance — i.e., as an emergent consequence rather than a designed feature?
Context and further readings
The links above provide context for this meeting. More technically:
- I first articulated something resembling the IDO approach during my Ph.D. years in Cognitive Science at Sussex University and University of Birmingham (England) (Goal Processing in Autonomous Agents).
- In my first Cognitive Productivity book, I referred to it as “broad cognitive science”. I now realize that “integrative design-oriented research” is a better label.
- Our recent paper on Mental perturbance: An integrative design-oriented concept for understanding repetitive thought, emotions and related phenomena involving a loss of control of executive functions explained and applied the IDO approach.
- Culture, the Humanities, and the Collapse of the Grand Narratives – Quillette. Theoretical AI is a new contribution to the grand narrative about humanity. It does not replace the grand narratives. It needs to be understood in their terms, and the grand narratives need to respond to it. For example, theoretical AI researchers can take the stance that mind is largely innately determined or that ability is largely learnt; for AI to be successful, I believe we need to understand both and their interplay. I have been arguing since I was an undergraduate (late 1980s) against radical connectionists and radical associationist psychologists who underestimate the importance of innate mechanisms. (I presented a BPS poster on the topic in 1991. See also Rebooting AI, the book by Gary F. Marcus and Ernest Davis.)
Luc P. Beaudoin
- 2021-03-26. Extended the Jordan B. Peterson section.
- 2021-03-28. Added a bullet on Culture, the Humanities, and the Collapse of the Grand Narratives – Quillette, and some words about mental perturbance.