Natural/artificial neurons in tandem
Neuroscientists are currently busy refining neural/mental correlations. They are helped in this by analogies with artificial neural networks, especially self-supervised learning (SSL) networks. Today it is easy to record the activity of hundreds of thousands of natural neurons responding to images or spoken stories, yielding activation sequences. When SSL algorithms are fed the same images and speech, they produce analogous activation sequences among their simulated neurons. The result is a simpler, fully computerized model of neural functioning.
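As a rough illustration of how such comparisons between recorded and simulated activations are typically made, one standard technique is representational similarity analysis: build a dissimilarity matrix over stimuli for each system, then correlate the two geometries. The sketch below is a minimal version on toy synthetic data; all names, sizes, and the noise model are illustrative assumptions, not taken from any actual study.

```python
import numpy as np

def rdm(activations):
    # activations: (n_stimuli, n_units) -> (n_stimuli, n_stimuli)
    # representational dissimilarity matrix:
    # 1 - Pearson correlation between stimulus activation patterns
    return 1.0 - np.corrcoef(activations)

def upper_triangle(m):
    # keep only the unique pairwise entries above the diagonal
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def representational_similarity(brain, model):
    # correlate the two dissimilarity structures:
    # high values mean the two systems organize stimuli similarly
    a = upper_triangle(rdm(brain))
    b = upper_triangle(rdm(model))
    return np.corrcoef(a, b)[0, 1]

# toy data: 20 stimuli, 100 recorded "neurons", a 64-unit model layer,
# both driven by the same hypothetical 8-dimensional latent structure
rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 8))
brain = shared @ rng.normal(size=(8, 100)) + 0.1 * rng.normal(size=(20, 100))
model = shared @ rng.normal(size=(8, 64)) + 0.1 * rng.normal(size=(20, 64))
print(representational_similarity(brain, model))  # close to 1 when geometries align
```

The comparison never matches individual neurons one-to-one; it only asks whether the relational structure among stimuli is the same in both systems, which is exactly the sense in which "analogous activation sequences" is meant above.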
There are many criticisms of this approach. The brain is organized into functional areas; algorithms are not. How does the algorithm know that one sector of the simulation is dedicated to image analysis and another to speech?
This question is badly posed. It comes from an outside observer who categorizes the result. Neurons don't work that way. They create their own categories; it is their relationships that define them. If a self-observation appears in the network (a simulation of self-awareness), it is integrated into the network and therefore intrinsically "knows" the symbolism of its functions, without needing to be told that a particular area is intended for a particular function. Moreover, artificial neurons spontaneously reproduce the functional separation of brain areas: some become specialized in certain tasks.
Perhaps most interesting of all, artificial neurons also reproduce the hierarchy of natural ones. Self-supervised algorithms simulate the cortical hierarchy of language. This is an important validation for Stratium, the hierarchical theory of mind that I defend.
A more accurate criticism…
Another criticism is that natural neurons operate at much higher levels of complexity than artificial ones. Neuroscientists are debating the level of resolution needed for simulations to work effectively. Should we start only from the number of impulses (spikes), treating all neurons and synapses as alike? Or should we take into account the great diversity of neuron types, the rarely excited "dormant" synapses, backpropagation, neurohormonal variations; in short, all the biological complexity of the brain?
One thing is certain: it is not necessary to include everything to get intelligence. Today's SSL networks, built from highly simplified neurons, achieve remarkable results, surpassing the human brain in specific tasks.
And the crippling criticism
The most important criticism, which motivates this article, is that there is nothing qualitative in algorithms, however sophisticated they are. SSL networks have no soul, simulate no mental phenomenon, give no explanation, and allow not the slightest prediction about the possibility of consciousness in artificial networks. These phenomena escape them completely. Something is missing. Is that something related to the higher complexity of natural neurons?
Even if we know how to relate categories of biological processes to particular properties and phenomena, we still do not know why, in the human brain, these proven phenomena are associated with these biological relationships. We have no better theory of the natural than of the artificial. But at least we know that these phenomena exist, since we feel them! SSL networks, by contrast, show not the slightest sign of them. For these algorithms, the phenomenon of consciousness does not exist. The followers of the computational theory of mind are willingly illusionists: proponents of declaring consciousness an illusory epiphenomenon. The brain finds itself denigrating its own impressions.
Let’s put the illusion back in the right place
The disadvantage of this posture is enormous, indeed untenable: our feelings lose all quality. They are no more than encoded sequences. How do we differentiate a murder story from a shopping list? They are simply two different groups of calculations in the mental sequence. The whole qualitative appearance of mental life is erased.
If a model obscures such an essential aspect of the mind's properties, we can conclude that the model itself deserves the title of illusion. Computational theory is a mirage. It already lacked the explanatory power to account for the notion of 'substance'. It also lacks essential dimensions for understanding our mental reality. It is a valuable tool; let's not make it a Theory of Everything.