Paul Minda, a Canadian cognitive psychologist at the University of Western Ontario, asked an interesting question on Twitter:
Why do people pace around or engage in unguided, unfocused movement when talking on the phone? Does anyone know the answer?
I will focus mainly on a subset of this question, which is: why do we engage in this kind of movement while we are highly cognitively engaged (e.g., participating in a cognitively demanding conversation, or lecturing)?
I like to first try to answer a question myself (drawing as much as I can on my understanding of prior readings) before delving into others’ answers. So here are some “off the cuff” rambling reflections which expand on a series of my Twitter replies to Paul’s tweet. Keep in mind that I don’t specialize in cognitive embodiment. And the following is not rigorous reasoning. Just some (hopefully relevant) thoughts. But I am interested in all things relevant to cognitive productivity, which this is.
Later I might come back to the issue.
(Not yet proofread.)
Public natural language evolved from an underlying generalized language
According to Aaron Sloman and Jackie Chappell, public language evolved from generalized languages (GLs), which are like public natural languages with two main differences:
(a) GLs are used for internal purposes (e.g., contents of perception, expressing goals, reasoning, planning, controlling actions, and learning) rather than for communication with other individuals, and (b) they need not take a linear format, nor be composed only of discrete elements. Note that human sign languages share the features in (b).
They note further:
Requirements for a GL:

- The form of representation should support structural variability (unlike numbers, labels, or fixed-size arrays of numbers or labels);
- It should support recombinability of representational components (so that novel representations can be built);
- It should support a version of compositional semantics (generalised both to allow a wider variety of forms of composition and also to allow that the meanings of novel representations can be derived from meanings of components, the way the components are combined, and in addition aspects of the context — a factor not normally included in specifications of compositional semantics);
- In cases where what is represented is something that represents or uses information, then the representation must be part of a system with meta-semantic competences, including the ability to handle referential opacity (which we conjecture requires special architectural extensions, rather than special notations, like modal logics). This will include uses of GLs for self-monitoring, self-evaluation, self-control, etc.
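To make the notions of structural variability, recombinability, and context-sensitive compositional semantics a bit more concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy construction, not Sloman and Chappell’s formalism; the representation names and the context keys are hypothetical.

```python
# Toy sketch of a "GL-like" representation: tree-structured (not linear),
# recombinable, with meaning derived from components, their combination,
# and the context. Illustrative only; not Sloman & Chappell's formalism.

from dataclasses import dataclass

@dataclass
class Rep:
    head: str
    parts: tuple["Rep", ...] = ()

def meaning(rep: Rep, context: dict) -> str:
    """Meaning depends on the components, how they are combined, and the context."""
    if rep.head == "grasp":
        return f"grasp the {meaning(rep.parts[0], context)}"
    if rep.head == "nearest":
        kind = meaning(rep.parts[0], context)
        return context.get("nearest_" + kind, kind)
    return rep.head  # atomic component

# Recombinability and structural variability: the same components can be
# combined into novel, non-linear structures of varying shape.
plan = Rep("grasp", (Rep("nearest", (Rep("branch"),)),))

print(meaning(plan, {"nearest_branch": "low branch on the left"}))
# -> grasp the low branch on the left
print(meaning(plan, {}))  # same structure, different context
# -> grasp the branch
```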
Given my interest in purposive behavior, I’ve found this explanation of the evolution of public languages (i.e., that they presuppose rather than cause GLs) to be quite interesting. Key facts and claims are:
- Many other species have some version of a GL, as evidenced by the fact that they are able to plan actions (e.g., some crows and some non-human primates). So public languages are not required for GLs. GLs came first.
- Being able to plan complex actions sets the stage for interpreting others’ behavior: inferring a goal and a plan requires manipulating representations in a GL. A GL can be used to construct such interpretations without a public language; many species can interpret each other’s behavior (and sometimes human behavior too) in ways that go beyond conditioning. Note that the agent whose actions are interpreted need not intend to communicate. Again, interpretation does not require a public language, but a public language does require a GL.
- Planning can survive brain damage that affects verbal communication. There’s something evolutionarily very old (in humans) about GL capabilities.
- A GL can be used for gestural communication. Deaf children in a community can develop and learn gestural communication quite readily.
- Pre-linguistic children evince a GL through planned action and interpretation, before evincing a public language.
These facts and claims are relevant, but they are not organized in a rigorous argument.
A leap and some hand-waving to address, if not answer, Paul Minda’s question: engaging in reflective thought activates some of the same underlying brain machinery that is involved in planning and executing physical actions, and this can arouse that machinery. (This is more specific than the preparation-for-action hypothesis in the next section.)
The above is not to be confused with the motor theory of speech perception (MTSP), the idea that in order to understand speech one has to reverse-engineer its production and emulate it at some level in the brain/body. An important argument against the MTSP is that speech interpretation can remain intact in apraxia. (See Gregory Hickok’s “Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans”.)
Preparation for action
Many theories of emotion posit that a major function of emotion is to prepare the agent for action. Some even posit that particular categories of emotion are mapped to particular (though abstract) actions. Nico Frijda, for instance, talked at length about emotions in terms of “action readiness”. There are many types of emotion theories (see [Moors (2017)](https://psycnet.apa.org/record/2017-09295-001) for a good review). While they differ with respect to how specifically they assume emotion primes action, they generally agree that emotions involve preparation for action. I deliberately used the vague word “involve” because an increasing number of affect theorists are skeptical about whether “emotion” is a helpful scientific category. (That is one of the reasons why in 1992 I proposed the term “perturbance” to designate an emergent phenomenon associated with what are at least colloquially called “emotions”.)
In any event, almost all affect psychologists (whether or not they believe emotion is a valid category) posit that affective states involve arousal (or tension) and pleasure. (Sometimes “potency” is thrown in for good measure.) And they claim the function of arousal is to prepare for action. For instance, James Russell, who is quite a skeptic about emotions, acknowledges in a 2009 paper (PDF here) that
Perhaps one can also detect broad [autonomic nervous system] response patterns, mobilisation for action versus relaxation for homeostasis or perhaps preparation for approach versus avoidance (Larsen et al., 2008).
It may be that engaging in verbal communication, particularly when this communication is cognitively demanding, increases arousal. As a natural preparation for action, the agent begins to move.
(I personally have major reservations about the concept of arousal as used in affect and insomnia research, which I intend to publish in upcoming papers, if all goes according to plan.)
Hypothesis: changing physical environment can stimulate creativity
There are supposedly “incubation effects” on creativity (cf. the Wikipedia entry). That research assumes that if, after being presented with a problem that requires some “creativity” to solve, you wait a while, then something interesting can happen in your brain that leads to the solution. Time is of the essence here. That is, creative incubation is assumed to be “temporal incubation”, which makes sense by analogy with other forms of incubation.
There’s an alternative (or additional) conjecture/interpretation of at least some incubation data, which we might call “environmental incubation” and which for a couple of decades I thought had been demonstrated. But when I started looking this up c. 2013, I couldn’t find anything on the topic. Well, to be more accurate, I vaguely recall, circa 2014, finding a paper on “environmental incubation” along these lines, but I couldn’t find the paper the following year and haven’t since. (That’s particularly unfortunate given that I’m into research information management.) I’ve read papers that discuss effects of the environment on creativity, but that is not exactly what I’m talking about here. (If you know of such work, please let me know. It’s such an obvious idea, and it would be so easy to run an experiment on it, that it must have been done. I’ve discussed doing some experiments on it with a creativity researcher.)
Here’s how the reasoning goes. If you are presented with a difficult problem in one environment and then try to solve it in that same environment, your thoughts about the problem become associated with the environment. Through associative conditioning, you would expect that if you later try to address the same problem in the same environment, the environment will maintain the activation of the cognitive structures you had previously created in response to the problem. The environment, combined with your cognitive goal, provides cues that prime your memories about the problem.
Priming these memories is fine if you were on the right track. But if you were on the wrong track, these memories would be unhelpful as they could crowd out other, more helpful, cognitions that are subliminally vying for your attention.
A tip follows from this: if you can’t solve a problem in a given environment, work on it in a different physical setting. The more the new environment differs from the old one, the less it will prime your prior misguided or fruitless thoughts. This would allow other cognitive structures to compete better for your attention (so that you can reason with, and about, them).
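To make the conjecture a bit more concrete, here is a minimal toy sketch in Python. It is entirely hypothetical (the cue sets, labels, and numbers are mine, not from any published model); it merely illustrates the idea that memories are primed in proportion to the overlap between their stored cues and the current environment, so a fruitless line of thought keeps dominating in the old setting but loses its advantage in a new one.

```python
# Toy sketch of "environmental incubation": memory priming as cue overlap.
# All names and numbers below are illustrative, not from any published model.

from dataclasses import dataclass

@dataclass
class MemoryTrace:
    label: str
    cues: set  # features of the environment present when the trace was formed

def activation(trace: MemoryTrace, current_cues: set) -> int:
    """Priming grows with the overlap between stored and current cues."""
    return len(trace.cues & current_cues)

# A fruitless approach formed at the desk, and a fresher idea tied only to the problem.
dead_end = MemoryTrace("dead-end approach", {"desk", "monitor", "problem"})
fresh = MemoryTrace("fresh approach", {"problem"})

office = {"desk", "monitor", "problem"}
forest = {"trees", "path", "problem"}

for name, env in [("office", office), ("forest walk", forest)]:
    print(f"{name}: dead-end={activation(dead_end, env)}, fresh={activation(fresh, env)}")
# office: dead-end=3, fresh=1     -> the old, fruitless line of thought dominates
# forest walk: dead-end=1, fresh=1 -> the playing field is level, so the fresh
#                                     approach can better compete for attention
```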
I don’t know about you, dear reader, but when I get stuck on a problem, one of my best strategies is to switch environments. That means physically moving to a different room, or even going for a walk. Actually, going for a walk in nature is one of my favourite strategies. (No time for an aside on attention restoration here; while it is pertinent to our discussion, it is not exactly the same phenomenon.) I also find that working in a different building (e.g., a library) or on a ferry boosts my creativity.
So, what appears to be temporal incubation might actually be environmental incubation. (Of course, there are alternative explanations as to why switching environments might help.)
Getting back to Paul’s question: perhaps walking while talking about something cognitively demanding enhances creativity because it involves a change in environmental stimulation. (NB: not only might walking change one’s perception of the external world, there are also internal changes, including proprioception, which may have effects via associative conditioning.) If so, then perhaps (a) people have tacit knowledge of this fact, and/or (b) they have architecture-based motivation (but not even implicit knowledge). If (a) and/or (b), they would exploit the fact.
Also pertinent: in Daniel Dennett’s 1970s paper “Why the Law of Effect Won’t Go Away”, he mentions that the Latin verb cogito apparently derives from Latin words meaning “to shake together”. Walking around is a way to shake things up cognitively.
Before signing off on this section, I should mention that one of my pet peeves with psychology is that many key theoretical terms are very ineptly chosen. (When I’m particularly irritated by this, I think of it as “terminological obfuscation”.) Shakespeare was not altogether serious when he wrote “A rose by any other name…” Perhaps poor word choice reflects a lack of training in conceptual analysis. In fact, I intend to publish a paper on this subject. I mention terminology because the expression “environmental incubation”, which I have just used, is problematic. It’s shorthand for “environmental explanation of so-called incubation effects”. But the concept lacks the key feature of incubation, which is time. So it shouldn’t really be called incubation at all.
Architecture-based motivation
In the previous section I alluded to architecture-based motivation, which I discuss in the first Cognitive Productivity book. Architecture-based motivation is, basically, motivation produced by the system without means-ends planning (or other forms of deliberative planning). It’s when internal or external events trigger a motive (the idea assumes the mind has motive generators). For example, a child responds to toys by playing. She’s not necessarily trying to develop competence, though her play is there, evolutionarily, because it has favored the development of competence. (Frijda discussed something similar to this in his 1986 emotion book.)
Whether or not the other conjectures in this post turn out to be helpful, the concept of architecture-based motivation is certainly very relevant to Paul’s question. The kind of moving around Paul mentioned is not normally deliberative. But it still may involve motives. For we are not forced to move around. That is to say that moving around is not just “stimulus-driven”, even if it is habitual; it is reactive motivated behavior. (Admittedly, not everyone would agree with me that such behavior involves goals. But most of them have not tried to design robots.)
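Here is a minimal, purely illustrative sketch in Python of what architecture-based motivation might look like in a design: “motive generators” react to events by inserting motives, with no deliberative planning involved. The class names, events, and insistence values are hypothetical; they are not drawn from the Cognitive Productivity book or from any published architecture.

```python
# Minimal sketch of architecture-based motivation: motive generators watch
# internal/external events and insert motives reactively, without planning.
# All names, events, and numbers are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Motive:
    description: str
    insistence: float  # how strongly it attracts/interrupts processing resources

# A motive generator maps an event to zero or more motives.
MotiveGenerator = Callable[[str], list]

def toy_generator(event: str) -> list:
    if event == "toy_in_view":
        return [Motive("play with the toy", insistence=0.6)]
    return []

def impasse_generator(event: str) -> list:
    if event == "impasse_in_conversation":
        return [Motive("get up and pace", insistence=0.4)]
    return []

@dataclass
class Agent:
    generators: list = field(default_factory=list)
    motives: list = field(default_factory=list)

    def perceive(self, event: str) -> None:
        # Reactive: events trigger motives directly; no means-ends planning here.
        for generate in self.generators:
            self.motives.extend(generate(event))

agent = Agent(generators=[toy_generator, impasse_generator])
agent.perceive("impasse_in_conversation")
print([m.description for m in agent.motives])  # ['get up and pace']
```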
Flip the question: why do we sit?
Another helpful way to address Paul’s question, I think, is to ask oneself: “Why do we sit?” And then “Why would we not be moving?”
Some answers that come to mind:
- to conserve energy,
- because it is safe or safer to do so, and
- it’s easier in group communication (beyond 2 or 3 people) if the participants stay still.
But if none of those constraints applies, the default might actually be to move. While moving, you can discover threats and opportunities. Some of those discoveries might help you solve the problem you’re talking/thinking about.
In this context we should keep in mind that our ancestors’ concerns were tied to their physical environment. They were concerned with physical resources, possibilities, problems, etc.
Pertinent to this reversal of Paul’s question is the notion of inhibition/self-regulation that someone in his Twitter thread mentioned. That is, sitting still may involve a lot of self-regulation. Cognitively intense conversations (or solo problem solving) tax executive functions, making it more difficult to inhibit movement (i.e., to remain seated).
Cultural influences
Cultures vary with respect to the average proximity of interlocutors, the amount of gesticulation, and perhaps their conversational walking patterns.
I am French-Canadian in a sea of English (North America). I don’t have scientific data on this, but my personal impression is that French Canadians tend to be more animated in their discussions. At house gatherings, the action is more likely to be happening in the kitchen, where people are standing up. When you’re standing up, you’re more likely to move. Culture might thus partly explain why some people move less than others (i.e., why they inhibit moving).
Why work standing up?
My earlier blog post on working standing up has a section called “Why work standing up?” that is relevant to this discussion. I also discussed this in my first Cognitive Productivity book.
I like working standing up because it allows me to move around more. When I hit an impasse, I am much more likely to pace if I’m standing up than if I’m sitting in a chair.
More generally, the concept of impasse is relevant here. Kurt VanLehn’s Mind Bugs: The Origins of Procedural Misconceptions is an oldie that remains a goodie. How people detect and respond to impasses in problem solving is very important. Kurt modeled that in symbolic computer programs. And he worked with empirical psychologists to detect impasses in problem solving. I’d be very interested to know whether people are more likely to move around when they hit an impasse, and whether (natural or strategic) motion helps resolve impasses. (The discussion of creativity above suggests “yes”, but this is still in the realm of [informed] conjecture.)
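To illustrate what an impasse looks like in the spirit of rule-based models of problem solving (this is my own toy sketch, not VanLehn’s model), consider a solver that signals an impasse when none of its rules applies and then invokes a repair. Here the hypothetical repair is a change of context, echoing the environment-switching conjecture above.

```python
# Toy sketch of impasse detection in a rule-based solver: an impasse is
# signalled when no rule applies, and a "repair" is attempted.
# Illustrative only; not VanLehn's actual model.

from typing import Callable, Optional

Rule = Callable[[dict], Optional[dict]]  # returns a new state, or None if inapplicable

def solve(state: dict, rules: list, on_impasse: Callable[[dict], dict], max_steps: int = 10) -> dict:
    for _ in range(max_steps):
        if state.get("solved"):
            return state
        for rule in rules:
            new_state = rule(state)
            if new_state is not None:
                state = new_state
                break
        else:
            # No rule applied: an impasse. Respond rather than silently stopping.
            state = on_impasse(state)
    return state

# Hypothetical example: a repair that "changes environment", enabling a new rule.
def add_two(state):
    return {**state, "x": state["x"] + 2} if state["x"] < 4 else None

def finish(state):
    return {**state, "solved": True} if state.get("fresh_context") and state["x"] >= 4 else None

def switch_rooms(state):
    print("Impasse! Going for a walk...")
    return {**state, "fresh_context": True}

print(solve({"x": 0}, [add_two, finish], on_impasse=switch_rooms))
# -> Impasse! Going for a walk...
#    {'x': 4, 'fresh_context': True, 'solved': True}
```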
The Reason You Walk by Wab Kinew
I don’t read nearly as much fiction as I would like to, which is a real shame given my Learning from Stories project. I partly make up for it by watching good films, and by deliberately thinking about and discussing the fiction I do “consume”. And I get some of my fiction second hand, including from my wife who is in a couple of book clubs. She kindly discusses her readings with me.
She and her book club recently read The Reason You Walk by Wab Kinew. Wab is a Canadian First Nations author. I’ve not read the book myself. And it’s not about Paul’s question. But I couldn’t help making and expressing a connection between the two. (An example of architecture-based motivation and negative self-regulation!)
Glenberg’s Indexical Hypothesis on What brains are for: Action, meaning, and reading comprehension
I will just vaguely point to Arthur Glenberg et al.’s article “What brains are for: Action, meaning, and reading comprehension”, which is relevant to this post. They write:
A tremendous amount of evidence is accumulating that supports the notion that linguistic meaning is based on the body’s perception and action systems, that is, the systems that derive and mesh affordances.
They argued that learning to read is facilitated by engaging in actions that demonstrate what is being read. Whether that has held up to later empirical scrutiny, I do not know. (I focus on adults.)
The theory points to what is (supposedly) helpful rather than to what people naturally do. However, it is always worth investigating whether animals have an evolutionary bias towards that which is helpful.
Perhaps motion is preparation for gesture, and gesture would help one better understand and convey meaning. Not irrelevant to conversation.
(Not unrelated to gesture are the possible cognitive benefits of drawing; see “The Surprisingly Powerful Influence of Drawing on Memory” by Myra A. Fernandes, Jeffrey D. Wammes, and Melissa E. Meade, 2018.) (I, for one, purchased an iPad Pro with Apple Pencil 2 last November in order to draw more, which I do believe is useful for comprehension and retention.)
Concluding remarks: Twitter as a debugging room
Cal Newport, author of Deep Work, has argued that social media is incompatible with deep work. I agree with much of what Newport has to say. But I have also, on this blog, taken issue with many of Newport’s claims.
I have been informally trying to assess whether and how Twitter can be helpful for cognitive productivity. I use it as a playful medium. But I also use it as a knowledge resource discovery tool. I follow intelligent, knowledgeable, interesting experts in multiple domains. I keep an eye out for particularly interesting tweets. I occasionally blog in response to them, or at least give them some thought.
I treat Twitter as a bit of a “debugging room”, by analogy with the “debugging room” that existed in the heyday of Sussex University, where I started a D.Phil. in 1990, and which was at the time one of the most vibrant schools of cognitive science in the world. (There were Aaron Sloman, Maggie Boden, David Young, Phil Agre, Andy Clark, and many other luminaries. And there were many bright students. I think there were 30+ Cognitive and Computing Sciences Ph.D. students, and a big master’s program.) One could go sit in the debugging room and have coffee and conversations with great minds, on all kinds of topics.
Twitter is much wider than a debugging room, of course, and as such it has a lot of potential. Admittedly, most of what is tweeted is irrelevant. And Twitter needs to improve its filtering algorithms to better meet the needs of cognoscenti. (If you’re reading this, Twitter, you can hire me as a consultant to help you do better on that front.) But I do think it has potential as a net booster of cognitive productivity, if one uses it strategically, which is hard given it’s such a “firehose”.
Paul Minda’s tweet was very interesting. The response above was just a preliminary collection of thoughts. But I’m sure I will keep thinking about the issue for years. And who knows what insights it might lead to.