Understanding Ourselves and Each Other with Virtual Machine Concepts

There is a nexus of ideas that is relevant to the so-called “mind-body problem” and “consciousness”: that we can understand ourselves as a collection of interacting virtual machines. In this post, I’d like to convey some of the major features of virtual machines that make them interesting for understanding minds.

Some literature on the relevance of virtual machines

If you are philosophically inclined, here are some central ideas about virtual machines (“VMs”) that should whet your appetite. They are from Aaron Sloman’s Architecture-Based Conceptions of Mind.

  1. Functions in a virtual machine, such as deriving new information, making a plan, taking a decision, choosing a chess move, or multiplying two numbers are not physical functions.
  2. Virtual machine events and processes require a physical infrastructure. Some of the processes are implemented within the body, but not all, for instance when a true belief about a remote object becomes false without the agent knowing.
  3. Causal relationships can hold between virtual machine events.
  4. They can hold in either direction between physical and virtual machine events.
  5. This does not presuppose causal incompleteness at the physical level.
  6. The (software) engineering concept of ‘implementation’ and the philosophical concept of ‘supervenience’ are closely related.
  7. Sometimes the existence of a (working and switched on) physical machine guarantees the existence of a (working) virtual machine of a certain sort, and this is not just a matter of an arbitrary interpretation of the physical processes.
  8. Virtual states cannot be derived from physical states; i.e., although physical states, events, and processes can determine virtual machine phenomena, the latter are not logically or mathematically derivable from the former, if the concepts required to specify the virtual machine are not definable in terms of those of physics, and their laws of behaviour are not logically derivable from those of physics (even when supplemented with physical descriptions of the implementation machine).
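Idea 6, on implementation, can be made concrete with a toy example. The Python sketch below is my own illustration, not Sloman’s: a minimal stack-based virtual machine in which “multiplying two numbers” is an event in the VM’s world, while the substrate merely shuffles list entries.

```python
# A toy stack-based virtual machine. "Virtual events" such as MUL exist
# only at the VM level of description; the implementation level just
# appends to and pops from a Python list.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":       # virtual event: a value enters the VM's world
            stack.append(args[0])
        elif op == "ADD":      # virtual event: "adding two numbers"
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":      # virtual event: "multiplying two numbers"
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# The description "this multiplied 6 by 7" is true of the VM, yet it is
# not a description couched in the substrate's own vocabulary.
result = run([("PUSH", 6), ("PUSH", 7), ("MUL",)])
print(result)  # 42
```

While the program runs, the physical machine guarantees the existence of the working VM (idea 7), and VM-level events cause later VM-level events (idea 3).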

No AI researcher has discussed VMs more than Aaron Sloman. He was the first to recognize their deep significance, and he still writes about them to this day. For instance, I recommend his 2013 chapter with Ron Chrisley, Virtual Machines and Consciousness. The topic is also extensively discussed in Sloman’s 2010 Phenomenal and Access Consciousness and the “Hard” Problem: A View from the Designer Stance.

See also What Cognitive Scientists Need to Know about Virtual Machines (2009).

Layering and emergence

One of the main reasons VMs are important for understanding the mind is that they help make sense of how mental processes can control physical processes. Various flavors of mind-body dualism have been proposed and refuted. The touted alternatives, however, such as eliminative materialism and physicalism, are often oversimplified: they overlook (and fail to illuminate) how complex heterarchies of virtual machines, with meaningful causation at and between layers, can account for our experience and competence.

Layering in systems biology

I touched on layering and virtual machines a few times in Cognitive Productivity. For example, in the introduction:

Neuroscientist Seth Grant defines systems biology as “a new branch of biology aimed at understanding biological complexity” (2003). Grant has identified eight interacting layers in the system to consider. The bottom layer is genetics and the top layer is behaviour. Synaptic connectivity is just one of the components of systems biology. Synapses themselves are now considered as complex computers (Grant, 2007). We can expect learning to happen at multiple layers and not to be faithfully approximated by any “hard wiring”. The mind itself must be considered as having multiple layers capable of learning. Between the brain and behaviour there are complex virtual machines—“the mind”. (Footnote 3) Mapping mental phenomena to brain mechanisms is a challenging task for scientists. As Steven Pinker put it, “Psychology, the analysis of mental software, will have to burrow a considerable way into the mountain before meeting the neurobiologists tunneling through from the other side.” (Pinker, 1999)

Footnote 3:

Thus, multi-scale modeling of the brain must include virtual machines. See Sloman (2009a) for a description of the mind as a layered virtual machine that is itself layered on top of physical machines (themselves layered). The concept of layering is well understood in telecommunications (the Internet protocol suite being one of several examples: http://en.wikipedia.org/wiki/Internet_protocol_suite ) and in computer software. However, it is still rarely explicitly invoked in relation to the mind. Yet to think in terms of “wiring” obscures the many layers at which learning may flexibly occur. Compare also Section 8-4 of Minsky (2006).
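The kind of layering the Internet protocol suite exemplifies can be shown in a deliberately simplified Python sketch (the layer names and fields below are my own illustration, not real protocol headers): each layer wraps the payload of the layer above on the way down and unwraps it on the way up, and no layer needs to know the internals of the others.

```python
# Illustrative layering, loosely analogous to the Internet protocol
# suite. Each layer only handles its own wrapper; the message itself
# is opaque data to every layer below the top.
def send(message):
    transport = {"seq": 1, "payload": message}           # transport layer
    network = {"dst": "10.0.0.2", "payload": transport}  # network layer
    link = {"frame": network}                            # link layer
    return link

def receive(link):
    network = link["frame"]        # unwrap link layer
    transport = network["payload"] # unwrap network layer
    return transport["payload"]    # unwrap transport layer

assert receive(send("hello")) == "hello"
```

Each layer could be modified, or could learn, independently, which is the point obscured by thinking only in terms of fixed “wiring”.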

An example of emergence: from “emotion” to perturbance

Last year, Agnes Moors published a very interesting paper pertinent to the theme of this blog post, Integration of Two Skeptical Emotion Theories: Dimensional Appraisal Theory and Russell’s Psychological Construction Theory. (Her paper received favourable peer commentary. The same year, she published a paper in Emotion Review on the same topic.) Her theory is “skeptical” in that it does not assert that emotions are mechanisms; instead, it views “emotions” as emerging from goal-processing mechanisms.

Although not acknowledged in Moors’ papers, this view was first articulated by Aaron Sloman in 1981 in “You Don’t Need a Soft Skin to Have a Warm Heart: Towards a Computational Analysis of Motives and Emotions”. The latter is also one of the first papers in which Sloman articulated his theory of emotion. He submitted it to BBS in 1981, but it was, unfortunately and in my opinion unwisely, rejected. It became the basis for an AI research program on cognition and affect, the “CogAff” research program at the Universities of Sussex and Birmingham, England (1981–2008), spawning numerous publications and Ph.D. theses (including my own). In 1992 (for similar skeptical reasons), I proposed the term perturbance to refer to this emergent phenomenon. (See, for example, our 1996 paper, “Towards a Design-Based Analysis of Emotional Episodes”.)

An interesting question to ask is: how can the emergence of perturbance be understood in terms of virtual machines? I’m currently co-authoring a paper that discusses perturbance as a contributor to insomnia. While the paper is not philosophical, it illustrates that emergent phenomena can have effects. (For example, a process can detect thrashing in an operating system and take action.) This paper builds on a paper I published at AISB last year, “Perturbance: Unifying Research on Emotion, Intrusive Mentation and Other Psychological Phenomena with AI”, which we will soon adapt and submit to the IEEE Transactions on Affective Computing (TAC) special issue on ‘Computational modelling of emotion: theory and applications’.
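The thrashing example can be sketched concretely. In the Python fragment below, the window size, threshold, and sample values are illustrative assumptions, not operating-system constants: no single page fault constitutes thrashing, yet once the emergent pattern is detected, a monitor can take causally effective action.

```python
# "Thrashing" is a pattern at the virtual-machine level: it exists only
# over a window of events, not in any one page fault. Detecting the
# emergent pattern can nonetheless trigger concrete action.
from collections import deque

WINDOW, THRESHOLD = 5, 100  # illustrative: 5 samples, 100 faults/sec

def thrashing(fault_rate_samples):
    recent = deque(fault_rate_samples, maxlen=WINDOW)  # keep last WINDOW samples
    return len(recent) == WINDOW and sum(recent) / WINDOW > THRESHOLD

samples = [20, 180, 190, 210, 250, 240]  # hypothetical fault rates
if thrashing(samples):
    # a real monitor might suspend or swap out a process here
    action = "suspend_lowest_priority_process"
```

Analogously, on this view, a perturbance is not any single insistent motivator but a pattern over motive processing; detecting and responding to it is a VM-level causal transaction.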

Why don’t we hear more about virtual machines in psychology and philosophy?

The concept of virtual machine shows up in philosophy, though rarely in detail with specific examples. In psychology, references to virtual machines are even scanter.

One reason virtual machines aren’t discussed more often, or in more depth, in psychology is that to deeply understand virtual machines, one needs to use (or have used) programming environments that not only provide a virtual machine but expose it in such a way that the programmer is invited to plant VM instructions. Poplog is one such environment; it was developed specifically for doing and teaching AI. I had the pleasure of using it for my Ph.D. thesis project and a few subsequent projects. Java has a VM, but most users will never extend the language directly (e.g., change its syntax).
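To give a flavour of what “planting VM instructions” means, here is a Python analogy (my own sketch; Poplog’s actual VM interface differs): a compiler front end emits, or “plants”, instructions into a code buffer, and the programmer can plant extra instructions of their own at compile time, effectively extending the language.

```python
# Sketch of "planting" VM instructions: the compiler emits instructions
# into a buffer, and user code can plant additional ones (here, a trace
# instruction) before the program is executed. Analogy only, not
# Poplog's real API.
code = []

def plant(op, *args):
    code.append((op, *args))

def compile_add(x, y, trace=False):
    plant("PUSH", x)
    plant("PUSH", y)
    if trace:                     # user-planted instrumentation
        plant("TRACE", "about to add")
    plant("ADD")

def execute(code):
    stack = []
    for op, *args in code:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "TRACE":
            pass                  # hook where debugging output could go
    return stack[-1]

compile_add(2, 3, trace=True)
print(execute(code))  # 5
```

The point is that the VM is an object the programmer can inspect and modify, not a sealed black box, and that is what makes such environments pedagogically valuable.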

For every hour that a cognitive science student spends reading philosophers’ views about the relation between mind and body, shouldn’t she spend ten hours experimenting extensively with VMs? Shouldn’t she be manipulating and trying to extend VMs? Shouldn’t she be developing models with sophisticated AI systems, like Poplog, that provide VM functions?

Another reason VMs are not described more deeply is perhaps that currently fashionable (and in many respects useful) paradigms, such as connectionism, are not easily amenable to virtual machine programming. The embodied/situated/etc. movements have given up on the hard but promising problems of virtual machinery.

Poplog documentation

Here’s some introductory documentation on virtual machines: “How a programming language is specified: virtual machines”. If you’re interested in virtual machines, you can search for the string “virtual machine” in that file.

Alas, it appears that Poplog is no longer actively maintained. The last update to the Free Poplog portal was in 2009. Still, I think there’s much to be learned from its documentation. And I hope it will be revived; or at least that someone will strive to make a system that has its many benefits.

Published by

Luc P. Beaudoin

Head of CogZest. Author of Cognitive Productivity books. Co-founder of CogSci Apps Corp. Adjunct Professor of Education, Simon Fraser University. Why, Where, and What I Write. See About Me for more information.
