
An alternative theory of the mind

By Design

Patron
I’m going to try to sketch out, very briefly, an alternative picture of the mind, one that is more structure-preserving of our current understanding and suspicions about the brain. Much of this will be in very abstract language, so my apologies to the more technically-minded, but if it’s any consolation the abstractions here will be of the actual physical architecture in the brain that underwrites the whole thing. This is, essentially, the ‘embodied cognition’ thesis in a nutshell -- i.e., facts about ‘the mind’ are really facts about ‘the brain’, and the taxonomy used to describe the one reduces to or collapses into the taxonomy of the other. To see that this is true, simply replace the words ‘concept’ and ‘representation’ with ‘neural population’ and see how much of the abstraction holds. Quite a bit of it, surprisingly. So here we go.

It’s looking increasingly likely that ‘the mind’ is an interconnected network of associated concepts and representations of the environment. That is to say that the internal architecture of the mind is one where representations and clusters of representations are tethered to each other, some more strongly than others. ‘Association’ here corresponds to connectivity strength.

Computation, on this construal, is the moment-to-moment transformation from one 'fixation' (or brain state, in concrete terms) to another within the concept space (the technical term sometimes used here is 'vector space'). This transformation is governed, in large part, by the connectivity strength between concepts. Concepts strongly associated with each other engage or invoke one another in a series of signal cascades through cortical layers.
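
To make that a bit more concrete, here is a minimal sketch in Python (my own toy illustration, with arbitrary sizes and random connection strengths, not a model of any particular circuit): a 'fixation' is an activation vector over a small population of units, and one computational step is just that vector pushed through the connectivity strengths and a nonlinear squashing function.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 5                                          # toy "concept space" of 5 units
W = rng.normal(scale=0.5, size=(n_units, n_units))   # connectivity (association) strengths
state = rng.random(n_units)                          # current fixation / brain state

# One computational step: project the current state through the connections,
# then squash it with a nonlinearity to get the next fixation.
next_state = np.tanh(W @ state)
print(next_state)
```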

Concepts that are deployed in tandem or frequently activated together become more strongly associated with use. In other words, the strength of connectivity between them is a dynamic function of their use, and can increase or decrease. In everyday language, this is what ‘learning’ is. In more concrete language (say, a physicalist treatment where 'mind' and 'brain' are the same thing), the adjustment of connectivity strength between neurons is, literally, the activation of genes by messenger molecules, 'directing' them to increase the synaptic weight between certain associated neurons (in the form of an increased density of transmitters and receptors). This is something that struck me as weird the first time I saw it. You normally think of genes as being kind of inert, activated during the body-building phase of embryonic development and then forever done. But, as Eric Kandel showed in his Nobel-winning research, genes play THE critical role in the consolidation of long-term memory.
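
Here is a hedged sketch of that 'learning as use' idea, in the spirit of a simple Hebbian update (the learning rate and decay values are illustrative choices, not anything taken from Kandel's molecular account):

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.01, decay=0.001):
    """Strengthen connections between co-active units, with mild decay.

    W    : (n_post, n_pre) matrix of connectivity strengths
    pre  : activation vector of the upstream population
    post : activation vector of the downstream population
    """
    W = W + lr * np.outer(post, pre)   # units that fire together wire together
    W = W - decay * W                  # disuse slowly weakens connections
    return W
```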

The important thing to remember is that the above is not a description of some virtual machine (like, say, the virtual machine called Windows on your desktop). Memories and beliefs aren’t stored in a virtual ‘bank’ to be retrieved later. Concepts aren't abstract groupings that exist in some virtual space (although we describe them topologically).

Instead, memories are distributed across the synaptic weights of a population of units (neurons). ‘Beliefs’ turn out to be particular configurations of synaptic weights that modulate the pairing of stimuli and responsive behaviors. Concepts and representations turn out to be populations of neurons firing in synchrony at varying activation levels. Within the neural network paradigm, there is no meaningful distinction between architecture (the structure of the brain) and content (a specific memory or feeling or belief).
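
One way to see the 'memories live in the weights' point is a Hopfield-style toy (my choice of illustration; the brain doesn't do it in this literal form): several patterns are stored in one and the same weight matrix, so there is no separate 'bank' holding them apart from the architecture itself.

```python
import numpy as np

# Two "memories" as +/-1 patterns, folded into a single weight matrix.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
n = patterns.shape[1]

W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)          # every memory adjusts the same weights
np.fill_diagonal(W, 0)

# Recall: start from a corrupted cue and let the dynamics settle.
cue = np.array([1, -1, 1, -1, -1, -1])        # noisy version of the first pattern
for _ in range(5):
    cue = np.where(W @ cue >= 0, 1, -1)
print(cue)                                     # recovers [ 1 -1  1 -1  1 -1]
```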

That should hopefully be enough to get started. I think there’s a real tension between this view and our common-sense intuitions about the mind (not to mention the Scn conception). I’ll try to go into any details y'all want to explore, if I can.
 

namaste

Silver Meritorious Patron
Very interesting.
I am currently reading a book by Dr. Joe Dispenza called Evolve Your Brain.
It seems to complement what you have described and vice versa.

Feel free to share some more.
 
... That should hopefully be enough to get started. I think there’s a real tension between this view and our common-sense intuitions about the mind (not to mention the Scn conception). I’ll try to go into any details y'all want to explore, if I can.

No. The only 'tension' which arises is from your own enforcement of identification of the 'hardware layer' (neuron associations) with the 'software' (thoughts, mental patterns) and 'virtualized clouds' (distributed association of mental patterns) which arise in the mind.

The mind is that which we label our own subjective experience of ideas & associations. The idea that the mind arises directly from natural phenomena remains an unproved (and potentially an unprovable) hypothesis. As an hypothesis it has a certain appeal to those who seek a deterministic or 'objective' explanation for all experience, but being subjective this is unlikely to prove a fruitful approach beyond spinning off some interesting partial technologies which don't actually do that for which they are hyped.


Mark A. Baker
 

Infinite

Troublesome Internet Fringe Dweller
No. The only 'tension' which arises is from your own enforcement of identification of the 'hardware layer' (neuron associations) with the 'software' (thoughts, mental patterns) and 'virtualized clouds' (distributed association of mental patterns) which arise in the mind.

The mind is that which we label our own subjective experience of ideas & associations. The idea that the mind arises directly from natural phenomena remains an unproved (and potentially an unprovable) hypothesis. As an hypothesis it has a certain appeal to those who seek a deterministic or 'objective' explanation for all experience, but being subjective this is unlikely to prove a fruitful approach beyond spinning off some interesting partial technologies which don't actually do that for which they are hyped.


Mark A. Baker

Oh, I dunno. Compare the OP to, say, the "reactive mind" hypothesis. It comes complete with imaginary engrams which, it subsequently turns out, are the aberrated spiritual residue of aliens blown up 75 million years ago in volcanoes that didn't exist at that time. Now, on the surface that might appear "unlikely to prove a fruitful approach beyond spinning off some interesting partial technologies which don't actually do that for which they are hyped" - but you can certainly make a lotta dosh peddling it.
 

By Design

Patron
Programmer Guy, I'm not sure if that comment was directed at me or at others in this thread. If the latter, then feel free to skip the next section haha.

I don't think brain science is going to contradict my abstraction above. On the contrary, I think the two are consistent (this is what I meant in the OP by an alternative that structurally preserved the theory of the underlying physical substrate). Admittedly, my version was just a toy version of the brain, watered down and leaving out some of the fascinating details, but it was just meant to be a superficial, wide-angled pass over the program.

It's true, however, that neural networks (which our brains undoubtedly are) represent the world by means of very high-dimensional activation vectors (i.e., by a pattern of activation levels across a huge population of neurons), that learning is a function of adjusting synaptic weights between both neurons and neural populations, and that memories and 'beliefs' are distributed across the 'weight space' of those connections. It's also true that the minute-by-minute vector transformations from one brain state to another are just a function of signal cascades through these populations of neurons. This happens when an activation vector from one population is projected through a large matrix of synaptic connections to produce a new activation vector across a second population of (nonlinear) neurons. Mathematically, this is an instance of multiplying a vector by a matrix and running the result through a nonlinear filter. You can run this process iteratively through any number of successive populations of neurons and, with appropriate adjustment of the synaptic weights that constitute the coefficients in the matrices described above, such an arrangement can compute any function whatsoever.
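
As a hedged illustration of that 'vector times matrix, then a nonlinear filter' step, iterated across successive populations (the sizes and weights here are arbitrary placeholders, not trained values):

```python
import numpy as np

rng = np.random.default_rng(1)

def population_step(activation, weights):
    """Project an activation vector through a synaptic weight matrix,
    then pass the result through a nonlinear 'filter' (here, tanh)."""
    return np.tanh(weights @ activation)

# Toy chain of populations: 100 neurons -> 80 -> 60.
sizes = [100, 80, 60]
weights = [rng.normal(scale=0.1, size=(sizes[i + 1], sizes[i]))
           for i in range(len(sizes) - 1)]

x = rng.random(sizes[0])           # activation vector over the first population
for W in weights:                  # iterate through successive populations
    x = population_step(x, W)
print(x.shape)                     # (60,) -- the final population's activation
```

With the weights adjusted appropriately (rather than left random, as here), stacks like this can approximate essentially any input-output mapping, which is the 'compute any function whatsoever' claim above.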

Replace 'activation vectors' with 'concepts' or 'representations', and talk of 'synaptic weight adjustments' with talk of 'connectivity strength between concepts', and you just end up with my abstraction.

No. The only 'tension' which arises is from your own enforcement of identification of the 'hardware layer' (neuron associations) with the 'software' (thoughts, mental patterns) and 'virtualized clouds' (distributed association of mental patterns) which arise in the mind.

The mind is that which we label our own subjective experience of ideas & associations. The idea that the mind arises directly from natural phenomena remains an unproved (and potentially an unprovable) hypothesis. As an hypothesis it has a certain appeal to those who seek a deterministic or 'objective' explanation for all experience, but being subjective this is unlikely to prove a fruitful approach beyond spinning off some interesting partial technologies which don't actually do that for which they are hyped.


Mark A. Baker

It has more appeal than just fulfilling pretheoretical inclinations. As a body of data grows, the number of theories capable of explaining and predicting regularities within that domain dwindles. As this is the goal of a theory (explanatory and predictive power), claims about how theories ‘don’t actually do that for which they are hyped’ are actually empirical claims. Obviously the picture I painted above is just an abstraction of cognitive neuroscience, but is your claim that cog-sci is actually explanatorily/predictively deficient relative to some rival theory, or that outright supernatural dualism is more tenable than some flavor of physicalism, or maybe something else? It's hard to tell from your short post.

As a sidenote, I’m not really using ‘mind’ here to mean just our introspective conceptual scheme of our own mental states. I’m using it to denote the ‘rules’ according to which those mental states are instantiated and manipulated. Positing another homunculus behind all the percepts doesn't do anything but push the explanation back a level. The goal is to explain the psychology of the homunculus. Obviously the homunculus isn't some kind of blank slate because blank slates, as Steven Pinker quips, just don't do anything!
 