Following a lead this morning in my ongoing attempt to reverse engineer the brain’s sleep-onset control system (SOCS), I came across a lush collection of articles on the brain’s propensity to predict. This was in the Philosophical Transactions of the Royal Society of London (Series B, Biological Sciences, 364(1521)), published in 2009, when interest in prediction really took off in cognitive science. This is relevant to my research for several reasons, one of which is that the first postulate of the somnolent mentation theory states that “A decline in situational awareness, or sense making, including active, globally coherent mentation, is not merely a consequence of impending sleep, but is pro-somnolent.” At sleep onset, the mind’s propensity to make predictions begins to wane. Postulate 1 entails that the SOCS can detect this state. It’s not yet clear to me how this would work. Consider, for instance, as some of the papers in that issue point out, that
- prediction and memory go together both neurally and functionally (Sir Frederick Bartlett held something similar),
- sleep onset is rich in memory processes,
- according to Moulton & Kosslyn (again, in that issue of the Philosophical Transactions), imagery is all about prediction — yet (critically for my theory) imagery occurs at sleep onset.
But that’s OK, for here we have a new problem to solve. I mention this not to delve into the design of the SOCS here, but to illustrate the main point of this post by claiming that “How to reverse engineer the SOCS” is a promising research problem, by which I mean it is a problem the pursuit of which can lead to deep, significant insights, whether or not my particular theory happens to survive the scrutiny of my experimental colleagues and other peers.
And that is what this brief blog post is about: promising problems. In science as in one’s mental life, these are the ones we need to be most sensitive to. We need to learn to detect them, record them for future consideration, assess them, select amongst them, and pursue a precious few. They are things that at first we don’t understand, that we find peculiar, and sometimes amusing:
The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny….” -Isaac Asimov (as quoted by Kline, 2008).
Now, with yet another nod to our old friend M. C. Escher, who made a career out of cleverly exploiting our perceptual predictions, I’d like to quote Moshe Bar from the opening paper of the aforementioned issue of the Philosophical Transactions:
Let me conclude this introduction with an intriguing analogy. The fighter plane F–16 is the first aeroplane intentionally designed to have an aerodynamically unstable platform. This design was chosen to enhance the aircraft’s manoeuvrability. Most aeroplanes are designed to be stable such that they strive to return to their original attitude following an interruption. While such stability is a desired property for a passenger aeroplane, for example, it opposes a pilot’s effort to change headings rapidly and thus can degrade manoeuvring performance required for a fighter jet. This behaviour has led to a saying among pilots that ‘you do not fly an F–16, it flies you’ (http://en.wikipedia.org/wiki/F-16_Fighting_Falcon). (p. 1182)
Promising problems are a bit like F–16s, too. You don’t fly them. You go where they take you.
1. Here’s an example: at the CogSci Conference last week, I had very lively discussions with Dr. Anna Belardinelli (postdoctoral researcher, Cognitive Modeling, Department of Computer Science, Universität Tübingen), who is zestfully examining low-level visual prediction using eye-tracking equipment. She showed that an agent’s visual activity precisely predicts the motion of its hands: 0.5 to 2 s before one grabs an object, one’s eyes have already fixated on the precise location where one will grasp or touch it (or a subject, presumably). Coincidentally, in the late 1980s I read several documents by Claude Lamontagne, who developed a very precise AI model of visual motion perception that included a low-level predictive system. Lamontagne’s model predicted a new class of illusions, the “sigma effect”, in which smooth pursuit eye movements could be triggered in the absence of external motion, yet with the illusion of real motion. Unfortunately, modern AI vision researchers have overlooked Lamontagne’s stunning thesis; some of them believe David Marr was the first AI researcher to work on vision.
2. He goes on to say: “As is evident from the collection of articles presented in this issue, the brain might be similarly flexible and ‘restless’ by default. This restlessness does not reflect random activity that is there merely for the sake of remaining active, but, instead, it reflects the ongoing generation of predictions, which relies on memory and enhances our interaction with and adjustment to the demanding environment.” I will have more to say about prediction in subsequent posts. For if we are to “use knowledge to become profoundly effective”, we need to learn to predict with it (without explicitly having to solve mathematical equations).
To conclude, I think there is only one way to do science: to meet a problem, to see its beauty and fall in love with it; to get married to it, and to live with it happily, till death do ye part – unless you should meet another and even more fascinating problem, or unless, indeed, you should obtain a solution. But even if you do obtain a solution, you may then discover, to your delight, the existence of a whole family of enchanting though perhaps difficult problem children for whose welfare you may work, with a purpose, to the end of your days. — Popper (1983, p. 8)
Bar, M. (2009). Predictions: a universal principle in the operation of the human brain. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364(1521), 1181–1182. http://doi.org/10.1098/rstb.2008.0321
Kline, R. B. (2008). Becoming a behavioral science researcher: A guide to producing research that matters.
Lamontagne, C. (1973). A new experimental paradigm for the investigation of the secondary system of human visual motion perception. Perception, 2, 167–180.
Lamontagne, C. (1976). Steps towards a computational theory of visual motion detection: Designing a working system. Unpublished doctoral dissertation, School of Artificial Intelligence, University of Edinburgh.
Lamontagne, C., & Howe, J. A. M. (1980). Towards a computational theory of visual motion perception: Macro-issues. Ghent: Communication and Cognition.
Moulton, S. T., & Kosslyn, S. M. (2009). Imagining predictions: Mental imagery as mental emulation. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364(1521), 1273–1280. http://doi.org/10.1098/rstb.2008.0314
Popper, K. R. (1983). Realism and the aim of science: From the postscript to the logic of scientific discovery. (W. W. Bartley III, Ed.). Totowa, NJ: Rowman and Littlefield.