By Design
Patron
I’m going to try to sketch out, very briefly, an alternative picture of the mind, one that better preserves the structure of our current understanding of, and suspicions about, the brain. Much of this will be in very abstract language, so my apologies to the more technically-minded, but if it’s any consolation the abstractions here will be of the actual physical architecture in the brain that underwrites the whole thing. This is, essentially, the ‘embodied cognition’ thesis in a nutshell -- ie, facts about ‘the mind’ are really facts about ‘the brain’, and the taxonomy used to describe the one reduces to or collapses into the taxonomy of the other. To see that this is true, simply replace the words ‘concept’ and 'representation' with ‘neural population’ and see how much of the abstraction holds. Quite a bit of it, surprisingly. So here we go.
It’s looking increasingly likely that ‘the mind’ is an interconnected network of associated concepts and representations of the environment. That is to say that the internal architecture of the mind is one where representations and clusters of representations are tethered to each other, some more strongly than others. ‘Association’ here corresponds to connectivity strength.
Computation, on this construal, is the moment-to-moment transformation from one 'fixation' (or brain state, in concrete terms) to another within the concept space (the technical term sometimes used here is 'vector space'). This transformation is governed, in large part, by the connectivity strength between concepts. Concepts strongly associated with each other engage or invoke one another in a series of signal cascades through cortical layers.
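To make the abstraction a little more concrete, here's a deliberately tiny sketch of the idea. The three 'concepts', the weight matrix, and its values are all hypothetical, invented for illustration; the point is just that 'computation' here is nothing more than repeatedly transforming an activation vector through a matrix of connection strengths.

```python
import numpy as np

# Hypothetical toy network: three 'concepts' linked by a weight matrix.
# W[i, j] is the connection strength from concept j to concept i.
concepts = ["dog", "bark", "cat"]
W = np.array([
    [0.0, 0.8, 0.3],   # 'dog' is strongly driven by 'bark', weakly by 'cat'
    [0.8, 0.0, 0.1],
    [0.3, 0.1, 0.0],
])

def step(state, W):
    """One moment-to-moment transformation: the current activation
    vector becomes the next one via connectivity strength."""
    return np.tanh(W @ state)   # tanh keeps activations bounded

state = np.array([1.0, 0.0, 0.0])   # start with 'dog' activated
for _ in range(3):
    state = step(state, W)

# The strongly associated concept ('bark') ends up more active
# than the weakly associated one ('cat').
print(dict(zip(concepts, state.round(2))))
```

Activating one concept invokes its strong associates far more than its weak ones -- the 'signal cascade' of the paragraph above, reduced to a matrix multiply.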
Concepts that are deployed in tandem or frequently activated together become more strongly associated with use. In other words, the strength of connectivity between them is a dynamic function of their use, and can increase or decrease. In everyday language, this is what ‘learning’ is. In more concrete language (say, a physicalist treatment where 'mind' and 'brain' are the same thing), the adjustment of connectivity strength between neurons is, literally, the activation of genes by messenger molecules that 'direct' the genes to increase the synaptic weight between certain associated neurons (in the form of increasing the density of transmitters and receptors). This is something that struck me as weird, the first time I saw it. You normally think of genes as being kind of inert, activated during the body-building phase of embryonic development and then forever done. But, as Eric Kandel showed in his Nobel-winning research, genes play THE critical role in the formation of long-term memory.
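The use-dependent strengthening described above is the classic Hebbian learning rule ("cells that fire together, wire together"). Here's a minimal, hypothetical sketch of it -- the unit count, learning rate, and activity pattern are all made up, and this stands in for the gene-mediated synaptic changes at the algorithmic level only, not the molecular machinery:

```python
import numpy as np

n_units = 4
W = np.zeros((n_units, n_units))   # start with no associations

def hebbian_update(W, activity, rate=0.1):
    """Strengthen connections between co-active units. The outer
    product of the activity vector with itself gives every pairwise
    co-activation; scale it by a learning rate and add it in."""
    dW = rate * np.outer(activity, activity)
    np.fill_diagonal(dW, 0.0)      # no self-connections
    return W + dW

# Units 0 and 1 are repeatedly activated together ('deployed in tandem').
pattern = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(5):
    W = hebbian_update(W, pattern)

print(W[0, 1])   # strengthened by repeated co-use: 0.5
print(W[0, 2])   # never co-active, so still 0.0
```

Connectivity strength as a dynamic function of use, in a dozen lines. (A realistic rule would also include decay or normalization so weights can decrease, as the paragraph notes.)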
The important thing to remember is that the above is not a description of some virtual machine (like, say, the virtual machine called Windows on your desktop). Memories and beliefs aren’t stored in a virtual ‘bank’ to be retrieved later. Concepts aren't some abstract groupings that exist in virtual space (although we describe them topologically).
Instead, memories are distributed across the synaptic weights of a population of units (neurons). ‘Beliefs’ turn out to be particular configurations of synaptic weights that modulate the pairing of stimuli and responsive behaviors. Concepts and representations turn out to be populations of neurons firing in synchrony at varying activation levels. Within the neural network paradigm, there is no meaningful distinction between architecture (the structure of the brain) and content (a specific memory or feeling or belief).
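The 'no storage bank' point can be illustrated with a Hopfield-style toy network (my example, not anything from the neuroscience above): the 'memory' below lives nowhere in particular -- it's smeared across every entry of the weight matrix -- yet the whole pattern can be recalled from a degraded cue. The pattern and network size are arbitrary choices for illustration.

```python
import numpy as np

# Store one 'memory' by imprinting it into the weights: each weight
# W[i, j] records whether units i and j agree in the stored pattern.
memory = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(memory, memory).astype(float)
np.fill_diagonal(W, 0.0)           # no self-connections

# Present a corrupted cue: flip the first two units.
cue = memory.copy()
cue[:2] = -cue[:2]

# One update step: each unit adopts the sign of its weighted input.
recalled = np.sign(W @ cue).astype(int)

print(np.array_equal(recalled, memory))   # the full memory is recovered: True
```

Notice that deleting the memory would mean changing the weights -- there is no separate 'content' to erase, which is exactly the architecture/content collapse the paragraph describes.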
That should hopefully be enough to get started. I think there’s a real tension between this view and our common-sense intuitions about the mind (not to mention the Scn conception). I’ll try to go into any details y'all want to explore, if I can.