Mental hierarchy supported by graph theory

Abstract: The first part of this article summarizes the theory of mind called Stratium, the only one to date to fully explain consciousness, even in its phenomenal aspect. If you already know it, jump straight to the second part, which explains the nature of the new mathematical proof enriching graph theory. The conjectured regularity of phase transitions in very large graphs has been confirmed, allowing predictions about their behavior. Part 3 explains how this result strengthens Stratium as a model of the mind. Phase transitions are a potential substrate for the symbolic coding of information by neural groups, as well as for its hierarchical nesting, from first-rank neurons up to the workspace of consciousness.

Part 1: Stratium, consciousness explained

Stratium is a theory of mind that accounts for both neuroscientific investigations and philosophical phenomena. The brain is seen as a hierarchy of neural groups whose synchronous excitations form symbolic networks of increasing depth of meaning. A network “turned on” at the base of the hierarchy produces a modest result in terms of complexity, a simple report of regularities in sensory flows, despite the large number of neurons involved. At the top, by contrast, a less extensive network generates, through the richness and breadth of its connections, the extraordinarily sophisticated experience of our waking thoughts.

Between base and top lie a multitude of levels, responsible first for diversifying the analysis of sensory data, then for aggregating the results into multi-criteria representations of the environment. A set of coordinated points moving in the visual field, at the base of the Stratium, becomes at the top the face of an intimately known human being.

The geography of the Stratium is easy to draw at its base, where the first-rank networks are grouped anatomically. This is the classic description of the brain in functional areas. The networks at the top, on the contrary, extend through the entire brain and are difficult to pin down. Their long connections are not “information transfers” between centers, as between the modules of a computer. Excited together, these networks symbolize a sophisticated concept in conscious space.

Consciousness, a sudden appearance?

Why do these symbolic levels end up producing consciousness at the top when there is none at the base? Note that consciousness is not an on/off phenomenon. It is progressive, a kind of fusion of a multitude of contents. Depending on the state of wakefulness, our consciousness is vivid or lazy, extended or narrowed. It deals with unconscious propositions whose origin can be partially traced (such a thought comes to me from such a habit). Tracing them consists in descending into one’s own mental hierarchy.

But this investigation is quickly blocked by the independence of the conceptual levels. It is impossible to feel our visual base assembling retinal points; we only feel the result. Go deeper? The only way is to form conscious abstractions, which model the functioning of our own brain as they do anything else. That is no longer a perception but a representation.

Science failing on the phenomenon

Why does a “consciousness phenomenon” appear, in the other direction, when the neural hierarchy stacks its symbolic levels up to the top? Our consciousness is experienced without intermediary and does not need an explanation in order to exist. Philosophers are right to refuse to make it an illusion. But then our models must include an explanation for this experience. Neuroscience, unfortunately, is failing here.

The transition between two symbolic mental levels makes them two different entities. These are still neurons exchanging excitations, but the levels of information are virtually independent. How is this possible, without changing the physical medium?

Let’s reason without material substance

Let’s not forget that scientists have abandoned the notion of substance to describe physical reality. There is no such thing as a “neuron”, a “molecule”, or a “particle” other than as part of a level of information. The neural “substance” is an assembly of molecules, the molecules of particles; the particles are excitations of quantum fields, with no way of grasping them other than through their mathematical descriptions.

Nevertheless, something of the definition of substance persists in the independence that a level of information can have. When we palpate an object, our skin sensors record the pressure against the outer molecular layer of a material aggregated by solid covalent bonds. Several levels of information are involved: the skin surface deforming against the object, the cellular piezoelectric sensors, the signal propagating along the sensory fiber, and its localization within the cortical stages that create the body schema.

A phenomenon initiated between two levels of information

“I hold an object in my hand” requires the cooperation of many levels, or more precisely a surimposition of the levels, because the higher ones can only arise on top of the previous ones. They create a relative independence, merging qualitatively different information from the previous groups. This independence makes one level the observer of its predecessors. It is here that I see the initiation of the phenomenon of consciousness, as described in detail in the book Surimposium. The complete consciousness we experience comes from the impressive number of levels surimposed by neurons in the brain, up to the vast upper network called the ‘global workspace’, which aggregates various functions and is maintained during waking by excitatory nuclei.

Getting an interview with the neurons!

This reminder of the Stratium theory was necessary before addressing the exciting news that motivates this article. An essential part of the theory is missing, one that only scientific research can supply: how do neurons create virtual levels of information by exchanging excitations? How do they qualitatively stage concepts when receivers and signals are electro-chemically identical? Or: why does the stimulation of one neuron trigger the impression of a known face, while another produces a tiny white dot in the visual field?

Stratium is a theory, that is, a teleological approach to the problem. It is an X-ray of reality that we can understand, but two things are missing to certify it: 1) Is it really mental reality that is being X-rayed? 2) How is reality constituted in this way? The ontology is missing.

Part 2: Phase transitions in graphs

The ontological approach may have just made a dramatic leap with the mathematical proof concerning phase transitions in graphs. Graph theory is today the best suited to explain how neural networks codify their information. It also serves as a model for the numerical simulation of these networks in AI.

A neural network can be represented as a graph whose vertices are the neural cell bodies and whose edges are the dendrites. Some dendrites fade and others form within minutes: it is a dynamic graph, in perpetual reshuffle. The number of vertices is around one hundred billion, a field of unprecedented complexity. Not all vertices are connected, of course, so the number of different graphs/states of the brain is beyond measure. How does order emerge from such chaos?

Symbolic subsets

This is not conceivable without a quantization appearing in the giant graph. Subgraphs must be individualized, without being totally separated from the rest; otherwise, how would they exchange information so that it can be integrated and deepened?

The individualization of subgraphs is not a question of geographical boundaries. It is their symbolism that is emancipated. That is to say, the graph Gn of the neurons thus grouped has a property Pn that is unique to it: Pn is the concept symbolized by the excited graph Gn, and by no other.

Subgraphs fit together to form other, hierarchically superior ones. A higher graph “collects” its vertices among the neurons of high symbolic weight in the subgraphs. The higher the level in the hierarchy, the longer the connections uniting the vertices, crossing the entire brain. But all this remains, on my part, a teleological approach. I merely observe this organization. The operators are the neurons. Unlike AIs, no one programs them. How do they decide to associate in this way?

The brain is close to a random graph

The simplest random graph model was created in 1959 by Paul Erdős and Alfréd Rényi. Take n vertices, draw a random edge between two vertices, then another, and so on, until all the vertices are connected. Each intermediate graph is part of the sequence G(n, m), where m is the number of edges drawn so far.

The analogy with the brain is easy: neurons are vertices that emit “random” extensions to their neighbors. Each state of the brain, over time, is therefore part of a sequence G(n, m), where n is the number of neurons and m the number of dendrites. Three reservations about this analogy:
1) The growth of dendrites is not entirely random but is influenced by local neurohormonal and excitatory activity.
2) Only axons extend to distant neurons, while dendrites only concern neighbors. Neurons therefore naturally form subgraphs.
3) Edges between mathematical vertices do not necessarily have a direction. Between neurons they do, with only weak back-propagation. This matters for the properties of graphs that we will now see.
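The “perpetual reshuffle” of such a dynamic graph can be caricatured in a few lines of code. This is a toy sketch, not a biological model: the rates p_add and p_drop are arbitrary illustrative parameters, and the vertices stand in loosely for neurons.

```python
import random

def dynamic_graph_step(edges, n, p_add=0.3, p_drop=0.1, rng=random):
    """One step of a toy dynamic graph on n vertices: each existing edge
    fades with probability p_drop, and with probability p_add a new
    random edge forms between two distinct vertices."""
    edges = {e for e in edges if rng.random() > p_drop}  # dendrites fade
    if rng.random() < p_add:                             # a dendrite grows
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return edges

rng = random.Random(42)
g = set()
for _ in range(1000):
    g = dynamic_graph_step(g, n=20, rng=rng)
print(len(g))  # fluctuates around a small add/drop equilibrium
```

Each intermediate state is one graph of a sequence G(n, m) whose m drifts over time, which is the sense in which the brain inhabits such a sequence.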

The typical example of the random graph: water

The natural system best described by a random graph is water: a set of molecules connected by weak bonds, which form or break according to the distance between molecules. Edges are created and disappear. Do the graphs succeeding one another in the sequence G(n, m) of water all have the same properties? This is the case when the agitation of the molecules is moderate, between 0 °C and 100 °C: the property of water is ‘liquid’. Below 0 °C it becomes ‘solid’ and above 100 °C ‘gaseous’. The transformation is sudden. A ‘phase transition’ occurs when a minimal variation of a parameter triggers an abrupt physical change.

Application to neurons

The properties of the sets described by graphs are varied. In the case of neurons, the property of a group/subgraph in its excited mode is to symbolize a mental representation. Does this property undergo phase transitions? This is a very rich hypothesis, as we will see. It would mean that a neural group symbolizes the same concept within a range of group configurations, but may suddenly switch to another meaning outside that range.

In more physiological terms, neurons would maintain the same overall relational symbolism as long as the stimuli they receive remain within their usual average. If these stimuli cease or change regularity, the symbolism of the group changes. Here we have the foundation of a metastable mental functioning, that is, one settled in a given stability but liable to switch to another.

Mathematical modeling

Phase transitions in random graphs have been known since Erdős and Rényi. They showed that an abrupt transition occurs when m reaches n/2: when the number of edges reaches half the number of vertices, a giant component appears, connecting a large share of the vertices. The addition of a few edges is enough to make it appear, with no hint of it just before. This phenomenon is also called ‘percolation’ in physics.

The ‘giant component’ property thus appears suddenly for a value of m/n between 0.49 and 0.51. But natural systems are less regular. They are formed of a multitude of subsets showing local properties whose sum does not necessarily yield the expected global property. This makes it very difficult to find the formula for a global property based on the number of edges.
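The abruptness of this threshold is easy to check empirically. The sketch below (a simulation; n and the m/n ratios are chosen arbitrarily for illustration) builds G(n, m) with a union-find structure and measures the largest connected component on both sides of m/n = 1/2:

```python
import random
from collections import Counter

def largest_component_fraction(n, m, seed=0):
    """Build an Erdős–Rényi graph G(n, m) and return the share of
    vertices lying in its largest connected component (union-find)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)     # a random edge
        e = (min(u, v), max(u, v))
        if e not in edges:
            edges.add(e)
            parent[find(u)] = find(v)      # merge the two components

    sizes = Counter(find(x) for x in range(n))
    return max(sizes.values()) / n

n = 100_000
for ratio in (0.30, 0.45, 0.55, 0.70):
    frac = largest_component_fraction(n, int(ratio * n))
    print(f"m/n = {ratio:.2f} -> largest component holds {frac:.1%} of vertices")
```

Below the threshold the largest component stays microscopic; just above it, a fixed fraction of all vertices suddenly hang together.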

In 2006, Jeff Kahn and Gil Kalai proposed a conjecture on this subject, known as the “expectation threshold” conjecture. It amounts to saying that by dividing the whole into subsets with the same property that do not overlap too much, the global property becomes predictable. It is this conjecture that has just been proved by Jinyoung Park and Huy Tuan Pham, with a demonstration whose simplicity surprised their colleagues.
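For reference, here is a standard formal statement of the result (my paraphrase of the published theorem, not wording from the article cited below). For an increasing property $\mathcal{F}$ of subsets of a finite ground set, Park and Pham proved the bound conjectured by Kahn and Kalai:

```latex
p_c(\mathcal{F}) \;\le\; K \, q(\mathcal{F}) \, \log \ell(\mathcal{F})
```

where $p_c(\mathcal{F})$ is the threshold at which $\mathcal{F}$ becomes likely, $q(\mathcal{F})$ is the much easier-to-compute “expectation threshold”, $\ell(\mathcal{F})$ is the size of a largest minimal element of $\mathcal{F}$, and $K$ is a universal constant. In short, the naive expectation-based estimate is never off by more than a logarithmic factor.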

Part 3: Application to the mind

How does this proven conjecture confirm the Stratium model of the mind? The connection may certainly seem surprising. Being able to derive a global property from local ones is of interest only to an outside observer, or to a mathematician seeking to confirm the model. How are the neurons themselves “interested”, since they manage among each other?

The “expectation” of the conjecture is also the “approximation” that one level of information makes of another, a principle at the heart of Stratium theory. This is the basis of an emergence: a level of information remains the same while its underlying constitution varies. The emerging level approximates the underlying one. The consequence is striking: the emerging level has a relative independence and is indeed an observer of its own constitution: “I keep the same identity as long as I am formed of the states between G(n, m1) and G(n, m2). Outside this range, I change identity or disappear.”

Mental conjectures

Transpose this to neural groups. Suppose they process the data of a fruit in the visual field. All states uniting their portions of curves into a pretty sphere with depressions at both poles lead to the global property ‘apple’. If one of the depressions gradually reverses and becomes a cone lengthening the sphere, the upper neural level that observes these states changes its property and becomes ‘pear’. That moment is a mental phase transition.

Neurons in the upper level conjecture that the global property is ‘apple’ or ‘pear’ from pieces of information in the lower level. ‘Apple’ and ‘pear’ are approximations, since the switch is fast for minimal visual changes. One identity changes radically into another, because each approximates its content. Each identity is an attractor.

Mental phase transitions are highlighted by optical illusions in which there are two equivalent ways of interpreting the image. Our brain easily switches from one response to the other. Two approximations compete.
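The attractor picture can be caricatured in a deliberately naive sketch. Everything here is invented for illustration: a single ‘shape’ parameter running from sphere (0.0) to cone (1.0), and arbitrary thresholds of 0.4 and 0.6. The label flips abruptly, with hysteresis, only when the input leaves the basin of the current attractor:

```python
def classify(shape, current_label):
    """Toy metastable classifier. The label only changes when the shape
    parameter crosses out of the current attractor's basin, so the same
    input (e.g. shape = 0.5) can yield either label depending on history."""
    if current_label == "apple" and shape > 0.6:
        return "pear"
    if current_label == "pear" and shape < 0.4:
        return "apple"
    return current_label

label = "apple"
for shape in (0.0, 0.2, 0.5, 0.7, 0.5, 0.3):
    label = classify(shape, label)
    print(f"shape = {shape:.1f} -> {label}")
```

The ambiguous value 0.5 is read as ‘apple’ on the way up and as ‘pear’ on the way down: two competing approximations, each stable within its range, exactly the metastability described above.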

Validate the approximation

How do the neurons of the upper group avoid making a mistake in their approximation? As we have seen, subsets have local properties (symbolisms, in the case of neurons) that do not necessarily predict the global property. That is, ‘sphere + 2 depressions’ does not always equal ‘apple’, especially if ‘sphere’ and ‘depression’ are themselves overly loose approximations.

The conjecture just proved indicates that to limit the error of the approximation, the upper level must aggregate subsets with neighboring, non-overlapping properties. Translated for neurons: the signals received by the subsets are identified correctly (the property sent to the upper group = “I am that”), and the subsets are well delimited functionally. “Neighboring properties” here means ‘correct identification’ for all subsets, whatever they have identified. It thus becomes possible to assemble criteria of shape, color, and so on.

Mind preserved by its environment

These conditions are met when the signals received show the same regularities over time. Our brain is programmed, but also conserved, by its environment. The usual stimuli keep the symbolism of the neural subsets active. It is on top of this stability that our higher concepts retain theirs.

And these concepts are not deceptive. The sequence of approximations, from one level of information to the next, could in principle lead the network to a completely different result. But its conclusions are reliable because the transitions between the “phases” of mental representations are relatively abrupt, and because the delimitation of the subsets makes the approximation effective. Overall reliability is good when these conditions are met, as the mathematical proof has just shown.

*

Phase transitions in random graphs: an unexpected proof (in French), La Recherche, July–September 2022
