Notes Regarding a Humanist Meeting on Reverse Engineering the Human Mind

This document contains preparatory materials for a humanist meeting on “Reverse engineering the human mind as a means to better understand ourselves and each other”. The meeting will be held in late March 2021 (members and invited guests only).

Participants may wish to first read the preamble, which I (Luc P. Beaudoin) published on 2021-03-23.

Schema activation exercises

Here are some “schema activation” exercises for participants to prepare for the meeting.

1. Analyzing pre-occupations

  1. Consider a current or recent experience of grief, fear, anger, resentment — preferably one you had difficulty ‘shaking’ (or that you would have difficulty shaking if you tried).
  2. Go for a walk to analyze the experience.
  3. Write down (journal) some of your intense pre-occupations. Draw a diagram if it helps (perhaps: try connecting nodes in the diagram).
  4. Now, ask yourself in evolutionary terms why it might be difficult for you to coldly put the preoccupying concern out of your mind. What evolutionary purposes might lack of control over consciousness have?
  5. Reflect on your experiences meditating.
  6. What’s the right balance between complete mental control and obsession? Where do conditions such as ADHD and schizophrenia fit on that spectrum?

Time permitting, I will define mental perturbance and argue that it is an unavoidable possibility for autonomous agents with sufficiently complex deliberative and “meta-cognitive” competence. To do this, I will also need to briefly explain what I mean by an autonomous agent.
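
To give a flavor of what such an argument involves, here is a minimal toy sketch in Python. The class names, thresholds and dynamics below are hypothetical illustrations of my own, not the theory itself: a highly insistent motivator keeps surfacing into “consciousness” even while a meta-level process tries to filter it out, whereas a weak motivator never gets through.

```python
# Toy sketch (hypothetical illustration, not the actual theory): a motivator
# with high "insistence" keeps penetrating an attention filter even though
# the agent's meta-level repeatedly raises the filter to suppress it.

import random

class Motivator:
    def __init__(self, name, insistence):
        self.name = name
        self.insistence = insistence  # propensity to capture attention (0..1)

class Agent:
    def __init__(self, motivators, filter_threshold=0.5):
        self.motivators = motivators
        self.filter_threshold = filter_threshold  # raised by meta-level suppression
        self.log = []  # record of which concerns surfaced into "consciousness"

    def step(self):
        for m in self.motivators:
            # A motivator surfaces when its insistence beats the (noisy) filter.
            if m.insistence > self.filter_threshold + random.uniform(-0.1, 0.1):
                self.log.append(m.name)
                # The meta-level tries to suppress it by raising the filter...
                self.filter_threshold = min(0.9, self.filter_threshold + 0.1)
            else:
                # ...but suppression decays, so insistent concerns return.
                self.filter_threshold = max(0.5, self.filter_threshold - 0.05)

random.seed(1)
agent = Agent([Motivator("grief", 0.95), Motivator("groceries", 0.3)])
for _ in range(20):
    agent.step()

# The grieving concern surfaces repeatedly despite suppression;
# the mundane one never does.
print(agent.log.count("grief"), agent.log.count("groceries"))
```

The point of the sketch: perturbance is not a module labeled “grief” but an interaction pattern between insistent motivators and limited meta-level control.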

2. Analyzing insomnolence

If you’ve recently experienced difficulty falling asleep, reflect back on the thought patterns you engaged in before falling asleep. It’s best to do this immediately the next morning, because we quickly forget. (In fact, as I explained in Cognitive Productivity: Using Knowledge to Become Profoundly Effective, even within seconds we forget the little that we experience of our own information processing.)

Per the above, journaling, drawing and walking might help you better understand this aspect of your experience.

3. On limited attention

One of the remarkable characteristics of human minds is the apparently limited amount of information that can be held in short-term awareness, to use Merlin Donald’s term (from A Mind So Rare, a book I presented at a humanist meeting in 2014). A related concept is working memory. It’s not just the “content capacity” that is limited, but also the number of conscious threads of activity that can run in parallel.

It seems to be mostly taken for granted, in folk psychology and in cognitive science (including neuroscience), that such limitations (as difficult as they are to characterize) are contingent: a fluke of evolution, or at best a reflection of evolution’s tendency to be parsimonious. Yet looking at the cortex, with its enormous number of connections (where each connection is itself a ‘computer’), I am struck not by a sense of parsimony but by one of abundance. So: might there be good design reasons for “this” limitation?

So, a question you can ask yourself is:

why should there be such limits on short-term awareness?

In chapter 4 of my thesis I argued that there are in fact very good engineering reasons for these limitations. But don’t peek at my answers until you’ve spent a few days (or months) trying to answer the question yourself.

Does that mean that the futuristic AI-themed film, Her, was not realistic in its portrayal of Samantha as processing multiple threads in parallel? Not necessarily, because “she” (in Her) was not embodied. My chapter 4 argument hinged on embodiment.
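
Since the argument hinges on embodiment, here is a minimal toy sketch of the underlying intuition. The function and numbers are hypothetical illustrations of my own, not the chapter 4 argument itself: two goals that both command a single body in parallel make no progress, whereas serializing attention on one goal at a time achieves both.

```python
# Toy sketch (hypothetical illustration): two goals sharing one embodied
# effector. Unarbitrated interleaving of motor commands cancels out;
# serializing attention on one goal at a time makes progress.

def pursue(arbitrate, steps=30):
    position, goals, reached = 0, [10, -10], 0
    for t in range(steps):
        if arbitrate:
            goal = goals[0]               # commit the body to one goal at a time
        else:
            goal = goals[t % len(goals)]  # both goals command the body in turn
        position += 1 if goal > position else -1
        if position == goal:
            reached += 1
            goals = goals[1:] + [goals[0]]  # goal achieved; rotate to the next
    return reached

# Serialized attention reaches both goals; interleaved commands reach none.
print(pursue(arbitrate=True), pursue(arbitrate=False))
```

A disembodied agent like Samantha has no single effector to fight over, which is why the parallel-threads portrayal is not obviously wrong for “her”.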

In any event, this is yet another example of taking the designer stance to psychological questions.

Caveat/aside: not everyone agrees that there are limits on “attentional” information processing (whatever that is), or on what those limits are. For instance, Daniel Dennett, in Consciousness Explained (sic), argues for multiple parallel drafts of processing and for “heterophenomenology” [thank goodness for TextExpander, since I can hardly type that word otherwise; if you will excuse this drafty thought surfacing].

4. The Goodness Paradox (book) and the psychology of norms

Our previous humanist meeting discussed the excellent book, The Goodness Paradox: The Strange Relationship Between Virtue and Violence in Human Evolution, by Richard Wrangham. The book cites Maciej Chudek and Joseph Henrich (of the University of British Columbia), who define norm psychology as

a suite of cognitive mechanisms, motivations and dispositions for dealing with norms
More generally, Wrangham characterizes norms as

learned behavioral standards shared and enforced by a community; in other words, they are “rules that everyone is expected to obey”.

However, rules as objective knowledge, “out there”, are only effective if they are implemented in internal “mindware” (beliefs, motivators, etc.).
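
To put that point in crude computational terms, here is a toy sketch of my own (the names and numbers are hypothetical, not Chudek and Henrich’s model): a community’s rule book has no behavioral effect on an agent until the rule is internalized as mindware, i.e., paired with an affective weight.

```python
# Toy sketch (hypothetical illustration): an external norm influences conduct
# only once it is internalized as mindware, here modeled as a norm paired
# with an emotional weight inside the agent.

COMMUNITY_NORMS = {"queue-jumping": "wait your turn"}  # rules "out there"

class Agent:
    def __init__(self):
        self.mindware = {}  # internalized norms: name -> emotional weight

    def internalize(self, norm, weight=1.0):
        self.mindware[norm] = weight

    def react(self, violation):
        # Norms merely listed in COMMUNITY_NORMS don't move the agent;
        # only internalized ones trigger an affective response.
        return self.mindware.get(violation, 0.0)

naive, socialized = Agent(), Agent()
socialized.internalize("queue-jumping", weight=0.8)

print(naive.react("queue-jumping"))       # the rule exists, but not as mindware
print(socialized.react("queue-jumping"))  # the violation now arouses the agent
```

The exercise below can be read as asking whether you can set such a weight to zero at will.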

Think of recent times when you were angry at someone who violated norms. Could evolution have designed you to be able to deliberately reject a norm of your choosing such that its violation did not trouble you?

Choose a norm that really matters to you. Now imagine that come hell or high water, you have to drop the norm for 24 hours despite witnessing its violations. How would that work out? Why?

5. A Dr. Jordan B. Peterson podcast episode

Dr. Peterson has published many lectures in psychology, several of which are aligned with an integrative design-oriented approach to autonomous agency, in that:

  1. They integrate multiple aspects of psychology (such as cognition, affect, motivation, socialization, personality, development).
  2. They repeatedly return to evolutionary purposes.
  3. They focus on humans as agents acting competently in the world.
  4. They draw from and contribute to multiple levels of analysis (evolution, neuroscience, information processing, consciousness, sociology, philosophy and the entire realm of myth).
  5. They frequently allude to competence and capabilities as needing to be explained. (We research psychologists need to be curious about seemingly obvious competence and behavior, just as Newton was about falling objects.) Compare John McCarthy’s 2008 paper in the journal Artificial Intelligence, “The well-designed child”, and Sloman’s in the same issue, “The well-designed young mathematician”.
  6. They contain subtle references to reverse engineering — e.g., “if you think rats are simple try building one” in S4 E5, an interview of Mark Manson, best-selling author of The Subtle Art of Not Giving a F*ck and, more recently, Everything Is F*cked.

Example JBP Podcast episode: Our Emotions and the Social Hierarchy

The features described above are evident in many episodes of the Jordan B. Peterson podcast series. Try this one, for example: S2 E48 of The Jordan B. Peterson Podcast: Our Emotions and the Social Hierarchy.

In the episode, Dr. Peterson mentions utilization behavior, which illustrates what happens in some forms of prefrontal-cortex damage, where “self-evident” human competence breaks down, revealing that the “simple” competence was actually extremely complex. If you’ve known anyone who has had a stroke, head injury, brain cancer or genetic psychopathology, you will be familiar with this.

One need not be a researcher to enjoy marveling about the capabilities described by Dr. Peterson (and by myself on the notes page).

It is not a coincidence that Dr. Peterson mentions the work of Norbert Wiener, Jeffrey Alan Gray and Hans J. Eysenck in this episode. This illustrates the interdisciplinary nature of IDO research.

With the integrative design-oriented approach (which is a form of theoretical “Artificial” Intelligence), we marvel at, and try to cogently specify (as an engineer might), human capabilities that may appear self-evident. Then we create designs and simulations that try to realize those capabilities. As far as I know, Dr. Peterson has not proposed detailed designs with the aim of them being implemented in computer simulations. However, I claim that the information from his work can be used as input to requirements analysis and design in IDO R&D.

Perturbance in Her (film)

When I watched the film Her, mentioned above, I got the sense that the director must have read about insistence and perturbance, including Why Robots Will Have Emotions. If I recall correctly, Samantha described herself as having difficulty keeping her mind off Theodore Twombly. Would an AI necessarily experience perturbance, i.e., as an emergent consequence rather than a designed feature?

Context and further readings

The links above provide context for this meeting. More technically:


Luc P. Beaudoin

Revision history