Guillaume Pourcel emailed me recently saying
I’m quite sure I spotted you in a video at SFU (recognize your voice and your ideas!): https://youtu.be/GGuBz63snLU?t=2733 (45:35). It’s a nice talk on the integration of connectionist and symbolic ideas w/ virtual machines, something I’m quite interested in; my PhD advisor did some really nice work in this area.
Guillaume and Alice Dauphin did an internship with me last year at CogSci Apps Corp. More on that internship below.[1]
Transcript of the Q & A exchange between Paul Smolensky and me
Siri and I have transcribed my exchange with Smolensky. Or you can skip past the transcript to the subsequent section on why all of this matters.
Just to go back to the beginning of your talk, or the whole foundation of your talk: it was to say it is not a matter of “either connectionist representation or symbolic representation. We have two things going on here.” I’m wondering what you think of the idea of putting that idea on steroids (since clearly from your talk it’s quite powerful) to make sense of all the data … saying: it’s not really two levels of representation, or two levels of mechanisms. We are probably dealing with a hierarchy of levels, or a collection of hierarchies of levels.
For instance, if you look at the McCulloch and Pitts neurons [they are] quite simple. Connectionist neurons improved on that and are a little bit more complex. Now […] Seth Grant is saying that synapses themselves are being viewed as mini [perhaps super] computers. So even within a [single] neuron there’s computation going on. If you look at computer programs, you see that the concept of stack comes up again and again: you have virtual machines that are embedded in virtual machines that are embedded in physical machines.
So what do you think of the idea of taking your idea into hyperspace and blowing this up into multiple levels?
Paul Smolensky’s reply:
I think it’s good. I think it’s right. So the idea that there are only two levels of organization in the mind-brain is clearly far from the truth. So, if there are ways of understanding the kind of computation that is being done at other levels, such that you can connect them to the higher levels: if we can understand the kind of computer that a synapse is in a way that’s going to allow us to build higher-level virtual machines that do cognition, then that’s great. Of course, it becomes harder and harder to do that. As the machine you are looking at gets more and more molecular, or gets complex in other ways, it’s hard to understand it well enough to see how you could use it as the basis for building a higher-level virtual machine that’s useful for something.
My follow-up question (at 47:42):
It’s interesting because that sounds like the arguments that people used to make against connectionism and dynamic models: “it’s complicated. How are you going to make sense of all of that?”
Paul Smolensky’s reply:
And the answer is: “here’s the way to do it.” You were right. We didn’t know before. Now we have an idea. And the more of those we get, the more we can do that.
My reply (at 2884 seconds, 48:04):
Thank you.
The subsequent Q & A (about compositionality), and indeed the entire talk, is also quite interesting.
Why does this matter?
1: The fact of layering destroys mind-body monism and dualism
This topic may seem esoteric. But it’s actually of huge significance, because the concept of layering can be used to debunk theories of mind-brain monism to which some philosophers still adhere. The brain is not merely a one-layer machine.
The concept of layering also destroys dualism, but most cognitive scientists knew dualism was dead anyway. The mind-brain can only be understood as a complex machine with multiple layers. Smolensky himself admits that even two layers is not enough. The concept of layering provides a more general and more useful set of notions than either monism or dualism.
2: Can’t understand ‘cognition’ without understanding layers and virtual machinery
The idea that the brain hosts virtual machines is itself of profound significance. Software engineers are familiar with the idea that software and communication protocols like TCP/IP are layered. But few of them, and few philosophers and psychologists, realize that the human mind is layered too. If we are to understand the mind, we have to understand technical concepts of layering.
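For readers who are not software engineers, here is a minimal sketch of the layering idea (my own illustration, not from Smolensky’s talk; the class names are invented): three “machines” stacked on one another, each implemented solely in terms of the operations of the layer beneath it, much as a virtual machine runs on a physical machine or TCP runs over IP.

```python
class BaseMachine:
    """Layer 0: stands in for the 'physical' machine; exposes only primitive addition."""
    def add(self, a, b):
        return a + b


class StackVM:
    """Layer 1: a tiny stack-based virtual machine implemented on a host machine."""
    def __init__(self, host):
        self.host = host   # the machine this VM runs on
        self.stack = []

    def push(self, value):
        self.stack.append(value)

    def add(self):
        b, a = self.stack.pop(), self.stack.pop()
        # Delegate the actual computation to the layer below.
        self.stack.append(self.host.add(a, b))


class Summer:
    """Layer 2: a higher-level 'machine' defined only in terms of the VM's operations."""
    def __init__(self, vm):
        self.vm = vm

    def total(self, numbers):
        self.vm.push(0)
        for n in numbers:
            self.vm.push(n)
            self.vm.add()
        return self.vm.stack.pop()


# Each layer only sees the interface of the layer beneath it.
machine = Summer(StackVM(BaseMachine()))
print(machine.total([1, 2, 3]))  # 6
```

The point of the sketch is only that the top layer neither knows nor cares how the bottom layer does its work; that is what makes talk of layers explanatory rather than decorative.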
I discussed this briefly in Cognitive Productivity: Using Knowledge to Become Profoundly Effective. Search for “Seth Grant” and “virtual machine” in there.
3: Layering involves discontinuities
Understanding layering is also a helpful way to understand the eponymous concept of my (perpetually) next book, Discontinuities: Love, Art, Mind.
4: What goes around, comes around
In the 1980s, (many) connectionists claimed that all brain computation could (and should) be explained in the statistical terms of neural nets. In his talk, Paul Smolensky discussed the need for hybrid models. In the Q & A I essentially pointed out that:
- Seth Grant has argued that synapses are essentially sophisticated computers (super-computers, Grant claims);
- this is essentially sub-connectionist computation;
- there are many layers in virtual machines;
- it seems the brain contains multiple hierarchies, from synaptic computation up to the highest levels of virtual machinery.
So the reductionist arguments that connectionists originally made against symbolic modelers (that “sub-symbolic” computation is more “real” than symbolic computation) would apply to connectionist modeling too.
Smolensky acknowledges the problem and concludes by saying that we need to simplify: i.e., use simpler tools until we figure out how to work with the additional complexity. That is essentially what many of us were saying to connectionists in the 1980s.
About Paul Smolensky
Paul Smolensky […] is Krieger-Eisenhower Professor of Cognitive Science at the Johns Hopkins University and a Partner Researcher at Microsoft Research in Redmond, Washington.
See Paul Smolensky – Wikipedia.
Historical aside: a semantically connected “Discovering Cognitive Science” presentation at SFU
I guess Paul Tupper of SFU invited Paul Smolensky to give this talk because Tupper himself has attempted to fully integrate Smolensky and Legendre’s The Harmonic Mind (MIT Press) into his work with John Alderete and Stefan A. Frisch, using Smolensky’s concepts. He had given a talk on 2010-10-27 in the same series, according to my notes (which I was able to access in accordance with the 2s rule of Cognitive Productivity 😊). Its abstract:
In this talk, we develop a connectionist model of learning phonotactics and apply it to the problem of learning root cooccurrence restrictions in Arabic. In particular, a multilayer network with a hidden layer is trained on a representative sample of actual Arabic roots using error-corrective backpropagation. The trained network is shown to classify actual and novel Arabic roots in ways that are qualitatively parallel to psycholinguistic studies of Arabic. Statistical analysis of network performance also shows that activations of nodes in the hidden layer correspond well with violations of symbolic well-formedness constraints familiar from generative phonology. The larger finding is therefore that a sub-symbolic phonotactic system trained from scratch can mirror the behavior of a symbolic-computational system typical of contemporary phonological analysis.
See their subsequent paper: Alderete, Tupper & Frisch (2011) Phonotactic learning without a priori constraints: Arabic root cooccurrence restrictions revisited (PDF)
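For readers who want to see what “a multilayer network with a hidden layer trained using error-corrective backpropagation” amounts to in code, here is a minimal sketch of my own (not the authors’ implementation; the toy data, dimensions and learning rate are invented purely for illustration):

```python
# Minimal sketch of a one-hidden-layer network trained by backpropagation
# on binary "acceptability" targets. Toy data only; not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy inputs: 8-dimensional vectors standing in for encoded roots;
# toy targets: 1 = "acceptable", 0 = "unacceptable".
X = rng.random((20, 8))
y = (X[:, 0] > 0.5).astype(float).reshape(-1, 1)

# One hidden layer of 5 units, one output unit.
W1 = rng.normal(scale=0.5, size=(8, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))
lr = 0.5

for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # network's acceptability score

    # Backward pass: squared-error gradient propagated through both layers
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print("final mean absolute error:", float(np.abs(out - y).mean()))
```

In the paper, of course, the interesting part is not the training loop but the analysis of the trained hidden units against symbolic well-formedness constraints; the sketch above only shows the kind of machinery being trained.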
Footnotes
- 1: Guillaume and Alice Dauphin did an internship with me last year at CogSci Apps Corp., as part of their master’s in Cognitive Engineering in France. The problems Guillaume and Alice worked on at CogSci Apps were very interesting and theoretically fundamental. Their research projects were so ambitious as to be unlikely to lead to practical applications any time soon. And that’s OK: pure theoretical research is important. Nevertheless, one never knows: pure research can lead to all kinds of unexpected applications! Guillaume is now doing a Ph.D. at the University of Groningen, continuing some of what he addressed at CogSci Apps Corp.
Revision history
- 2021-07-18 21:55-22:18: Fixed several typos, tightened up the text, and removed a section.