How to Really Solve the Mind-Body Problem (6)

Abstract: Artificial neural networks construct a hierarchy of information. That digital circuits could experience their own complexity seems beyond the reach of our current paradigms. I show, however, that above all they are programmed to avoid such autonomy, and that the depth of information they reach is minuscule compared to the brain's. The example of the hydrocephalic brain shows that this depth matters more than the number of neurons, natural or artificial. I return to the scientific status of the study of consciousness, recalling that the proper procedure is to confirm the theory against the reality of the thing studied. Phenomenal consciousness must therefore recognize itself in the theory, not be eliminated from it. On the contrary, conforming to the scientific principle implies extending into matter the search for the phenomenon, as its teleological expression.

We have seen a summary of How to Solve the Mind-Body Problem and its criticisms, specified the level of explanation the solution requires, retained the opposition between physicalist and phenomenological viewpoints while nevertheless insisting on a unified reality, and finally shown the interest of a new dimensional variety with two axes, horizontal and vertical complexion. We now come to a difficult question: How does vertical complexity produce quality? For example, how do seemingly identical neural groups produce, for some, a simple phenomenon of points of light on the retina and, for others, the face of an intimate and adored companion?

Does the complexity of AI only appear to its programmer or also to itself?

The answer to this question, essential to understanding consciousness, points us toward artificial intelligence (AI). Neural networks are simulated digitally, and what is reproduced is precisely a hierarchy. Computer neurons are organized into layers, each processing the output of the previous one. Hence the names: deep learning, multilayer perceptron, convolutional network. The information at the output of the AI is more complex than at the input. At least for the programmer able to read it, because from a physical point of view the outputs are electronic signals in the same binary language as the inputs. The tricky question is: Is the output information more complex only for the programmer, or also for the last digital layer, which created it?
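To make the layered hierarchy concrete, here is a minimal sketch of a two-layer perceptron in Python. The weights are arbitrary illustrations, not a trained model; the point is that every level computes nothing but weighted sums of the previous level's output, in the same numeric 'language' throughout.

```python
import math

def layer(inputs, weights, biases):
    # Each unit takes a weighted sum of the previous layer's output
    # and squashes it through a sigmoid -- the same operation at every level.
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A tiny perceptron: 3 inputs -> 2 hidden units -> 1 output (illustrative weights).
w1 = [[0.5, -1.0, 0.3], [1.2, 0.4, -0.7]]
b1 = [0.1, -0.2]
w2 = [[1.0, -1.5]]
b2 = [0.05]

x = [0.9, 0.1, 0.4]
hidden = layer(x, w1, b1)       # level 1 integrates the raw inputs
output = layer(hidden, w2, b2)  # level 2 integrates level 1's result
```

The output is 'more complex' in the sense that it summarizes the level below it, yet physically it is a number of exactly the same kind as the inputs.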

Admitting that digital circuits can experience their own information seems beyond the reach of our contemporary paradigms. I will make two remarks on this subject. The first, ironic, is that we are quite able to attribute sensations to invisible entities we have never encountered: divinities. The second and more important remark is that we have before our eyes entities that are perfectly capable of this: neurons. Neurons experience the complex information they build; we are its direct experiencers. Are they made of a substance of a particular nature, different from anything else in reality? No. They are made up of very common atoms. No incongruous property emanates from them. They are content to receive and propagate small quanta of energy.

Networks endowed with sensation

What experiences is not the solitary neuron but the network, that is, an entity defined by pure information. What is experienced is the transformation of data from the underlying level into integrated information at the experiencing level. Or the passage from one side of our metaphorical coin to the other. It is exactly the same for computer neurons. That digital layers use a uniform electronic language makes it seem that the experience is similar from one layer to another, that none of them experiences anything other than transfers of electrical charges. But we know this is wrong for neurons, since similar electrochemical exchanges provide a different experience in the conscious network. It is therefore at least hasty, and probably wrong, to say that a level of digital information is not capable of feeling the one that precedes it.

Of course it would be just as hasty to say that AI is conscious. At least not before agreeing very precisely on the definition of ‘consciousness’. The term covers several concepts: bodily awareness, self-awareness, moral consciousness, wakeful consciousness, human consciousness, phenomenal consciousness. The first three relate to the content, not the container; ‘wakeful’ and ‘human’ indicate that our consciousness recognizes itself as such only when complete and in the image of our fellows, which does not rule out other varieties. There remains what is unique, specific and unexplained: the phenomenon of consciousness, which we will take as the cardinal definition. The question becomes: is there a conscious phenomenon in an AI?

Does “our” consciousness sum up the phenomenon of consciousness?

To make the question tractable, let us take an easier challenge: Do neural networks outside the global workspace (outside of wakeful consciousness) experience a consciousness phenomenon? If we answer in the negative, we create an incomprehensible dualism within neurons, and therefore within fundamental physics. Neural re-entry mechanisms cannot explain a de novo phenomenon. These re-entries can be simulated numerically and fail to produce the desired phenomenon. They do not awaken it any more than in any other physical system with re-entry; many automatisms use re-entry without becoming conscious. What is missing? What do the neurons of the global workspace have that lets them produce such a phenomenon?
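The point about re-entry can be illustrated in a few lines of Python: a hypothetical unit whose output feeds back into its own input settles into a stable pattern of activity. The dynamics are perfectly simulable, and nothing in them yields a phenomenal remainder.

```python
import math

def reentrant_step(state, external_input, feedback_weight=0.8):
    # The new state mixes fresh input with the unit's own previous output:
    # a minimal re-entrant (recurrent) loop.
    return math.tanh(external_input + feedback_weight * state)

state = 0.0
trace = []
for t in range(20):
    state = reentrant_step(state, external_input=0.5)
    trace.append(state)
# The loop quickly converges to a numeric fixed point: re-entry as mere dynamics.
```

The parameters here are arbitrary; any stable feedback loop would make the same point, which is why re-entry alone cannot be what distinguishes the global workspace.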

The answer, which you guess after the previous articles, is that these neurons are at the top of a considerable height of vertical complexion. It is not their number that counts, but the tiering of levels of information created by the networks.

Normal consciousness in the hydrocephalic brain

Here is a remarkable example to confirm this hypothesis. Some brains are affected at birth by normal-pressure hydrocephalus. This dilation of the ventricles, the inner lakes of cerebrospinal fluid, flattens the unfortunate brain against the skull and allows the formation of only about ten billion neurons, roughly 10% of the usual number. And yet consciousness is strictly normal!

The phenomenon formed without difficulty. It does not seem to depend on the number of neurons. Yet a solitary neuron is not enough to form a recognizable consciousness. How many, at minimum, are needed? This question is obviously ill-posed. The right question is rather: What minimum degree of complexity must neural networks form for recognizable consciousness to appear? If the hydrocephalic brain forms a normal consciousness, it is most likely because the immense vertical complexion has formed normally, even if fewer neurons are present at each level.

The degree of integrated information

The theory that best accounts for the relationship between complexity and consciousness is the Integrated Information Theory of Giulio Tononi and Christof Koch. Tononi equates the degree of consciousness, Phi (Φ), with the depth of neural information integration. I will discuss this theory in detail in another article; note, however, that it does not by itself solve the problem of the consciousness phenomenon, at least not for those who do not see such a phenomenon in artificial intelligence, since the deepening of information processing is the same there. Moreover, Koch, interviewed in Pour la Science (FR), thinks that an AI cannot access consciousness. Nothing surprising. It does not seem that connecting ever more artificial neurons will create the phenomenon, since it is absent from already existing and incredibly extensive systems, such as search and personalization engines.
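As a purely illustrative sketch (not Tononi's actual Φ, whose definition is far more involved), one can compute a toy integration measure on a two-unit XOR system: the whole predicts its own next state better than its parts do separately, and that surplus is what 'integrated information' tries to capture.

```python
from itertools import product
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

def mutual_information(pairs):
    # I(X;Y) = H(X) + H(Y) - H(X,Y) over an empirical distribution of pairs.
    joint = Counter(pairs)
    x = Counter(a for a, _ in pairs)
    y = Counter(b for _, b in pairs)
    return entropy(x) + entropy(y) - entropy(joint)

# Two binary units whose next state is the XOR of both current states.
def step(state):
    a, b = state
    return (a ^ b, a ^ b)

states = list(product([0, 1], repeat=2))

# How well the whole system's state predicts its own next state...
phi_whole = mutual_information([(s, step(s)) for s in states])

# ...versus how well each unit, taken alone, predicts its own next state.
phi_parts = sum(
    mutual_information([(s[i], step(s)[i]) for s in states])
    for i in range(2)
)

integration = phi_whole - phi_parts  # positive: the whole exceeds its parts
```

In this toy system each unit alone predicts nothing about its own future (0 bits), while the whole predicts 1 bit; the surplus exists only at the level of the integrated network, which is the intuition the theory formalizes.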

However, thinking in this way still confines us to the purely horizontal dimension of complexity. Adding transistors to process more data does not raise the level of complexity, any more than adding atoms to a rock makes it smarter. It is not surprising that a megacomputer, processing only a larger mass of data, is no more awake than our desktop machines. But what if, instead of just collecting data, we asked these machines to integrate it further, and to do so autonomously?

A voluntary limitation?

This is where the current limitation of AIs lies. As the number of layers of computer neurons increases, instability appears. Simulated neurons are not equipped with the means of compensation with which evolution has endowed their organic counterparts. Natural selection has patiently eliminated all the follies that unbridled complexity could produce. AI researchers are doing the same by testing algorithms one after the other. They face more constraints than Nature, because they are trying to obtain a precise result that suits them, far from natural laissez-faire. What would we do with an inefficient or alienated AI, which would parasitize human society without providing it with any service?

What if their efforts succeeded in creating a vertical complexion rich in hundreds of levels of information? Judging by what happens in real neurons, it seems inevitable that a consciousness of the same order as ours would appear. To think otherwise is to create, once again, an inexplicable dualism between substances, whereas we do not know how to define them other than in common terms, those of information.

The hardness of the problem is not neuroscientific

If a consciousness forms in future, more evolved AIs, will we have explained the phenomenon? After all, we can also grow a brain and watch it become conscious. The phenomenon is not explained in itself, only generated by the height of the vertical complexion. As in ‘mind = brain’, the equation ‘consciousness = vertical complexion’ conceals a correlation within the sign ‘=’. We do not yet know, at this point, why one side of the coin becomes the other.

But we have come a long way. Instead of one huge coin with neurons on the tails side and consciousness on the heads side, whose two sides Chalmers finds really hard or impossible to connect, we have a huge stack of these coins. Each is a level of information, constitutive on one side, fused on the other. The problem of consciousness becomes that of the nature of the whole of reality. Why are we at the same time quantons, atoms, molecules, cells, organs and finally human consciousness, to name only the main levels? At this point in our investigation, we have extracted the brain-mind problem from neuroscience and referred it to the need for a general theory of reality, over which science, religion and philosophy quarrel.

Is the scientific principle well respected for consciousness?

Science prides itself on creating theories universal enough to predict phenomena never before observed, and it succeeds. Philosophy, for its part, produces remarkable analyses of the past, but its predictions are fragile; when one philosopher turns out to be right, many others have been mistaken. In fact, since every position finds its supporters, it seems inevitable that one of them will prove right. Agreeing on a consensus is almost contradictory with the spirit of philosophy, which is to diversify the ways of knowing, to build the cloud of errors in which the truth is necessarily enclosed, without designating it. This gives it less power over the world than science, which designates a truth until proven otherwise.

What provides the proof to the contrary, in science, is the level of information studied itself. A judicious principle: reality in person opens our eyes. Let us apply this principle at the level of ‘human consciousness’. It experiences its properties as wakeful consciousness. Would it be scientific to say that this is an illusion? It would be as unscientific as noting the precession of Mercury’s perihelion and keeping Newtonian mechanics rather than adopting the Einsteinian one. The reality being questioned is indeed human consciousness. The proponents of illusion must open their eyes.

Let’s not prolong this scientific error

Philosophy can make one sure prediction: the phenomenon of consciousness indicates that reality has a dimension that is not that of the spatio-temporal framework or of any purely quantitative framework. This other dimension makes it possible to place qualitative phenomena in reality. It is the complex dimension: each increment of complexity produces new phenomena independent of the previous ones. This is not a dimension unique to human observation. Every living entity recognizes others by their properties, with or without a brain, and receives an impression from them. Since there is no clearly visible boundary between living and non-living, there is also no clear boundary to intention, no limited scope for the teleological effort we experience.

To deny this teleological possibility in matter would be to extend to the rest of the complex dimension the scientific error we commit in treating consciousness as an illusion. However, let us also not make the mistake of saying that these are phenomena of the same order, as some already do by equating animal impressions with human ones. There is no ‘universal consciousness’, only consciousnesses specific to each integrated set of information.

Phenomena arise from crossings of complexity and are apparent only to an entity at least as complex as they are. Indeed, this entity must be equipped with identical properties to experience the phenomenon. When its self-organization has buried a phenomenon in its constitution, it must have recourse to assistants capable of experiencing it directly: these are the material instruments of humanity. We can assume that virtual instruments will soon allow us to explore the complexity of our mind.

Epilogue in sight

You have just followed the path leading to the theory of Stratium, which I will summarize in the next article. This theory affirms the existence of a qualitative transition: there are two sides to the coin. But what goes on in the heart of the coin? How can we model this transition? Can we do so in our preferred ontological language, mathematics? Yes, but on two conditions:
1) Limit the status of mathematics to that of language and not of constitution of reality, since it does not include phenomenal impressions.
2) Indicate when the sign ‘=’ separates two different dimensions and becomes ‘is correlated with’, which signals a crossing of complexity.

These crossings proceed from a universal principle that I will not explore here; I try to remain as unspeculative as possible. However, the great coherence of mathematical language suggests that it is probably this same metaprinciple that would make it possible to link its different branches into a metamathematics.

The investigation ends and gives way to the epilogue. The last two articles introduce Stratium, the solution to the riddle, and show its robustness in the face of the criticisms levelled at Humphrey’s How to Solve the Mind-Body Problem.


